Another close up 127mm triplet image (an incredibly detailed 11 hours) compared to a CDK14 image (36 hours or so). The 127mm looks pretty cool on Facebook, but even then, how could you feel okay doing this to the detail in your data (assuming this is a result of excessive AI sharpening)? I guess to give an ‘impression’ of sharpness or detail at a certain scale? Is that a thing??! Maybe I’m just a bit too in the realist camp at present...
I don't see anything wrong with sharpening if it works. When it failed for me it usually accentuated noise or wrecked the stars.
The CDK data must have been taken under terrible seeing, as there is no way a low-end 127mm refractor is going to compete with a properly collimated CDK14 in decent seeing.
Unless it was lots of short sub lucky imaging shots using a CMOS camera versus long exposure CCD.
Longer focal lengths have the Achilles heel of requiring better seeing.
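The aperture/seeing trade-off can be put in rough numbers with the Rayleigh criterion (a back-of-envelope sketch; the 0.127 m and 0.356 m apertures stand in for the 127mm triplet and a 14" CDK, and 550 nm green light is assumed):

```python
ARCSEC_PER_RAD = 206265  # conversion from radians to arcseconds

def diffraction_limit_arcsec(aperture_m, wavelength_m=550e-9):
    """Rayleigh criterion: theta = 1.22 * lambda / D, in arcseconds."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

refractor = diffraction_limit_arcsec(0.127)  # 127mm triplet: ~1.1"
cdk14 = diffraction_limit_arcsec(0.356)      # 14" CDK:       ~0.4"

# Typical long-exposure seeing of ~2" swamps both limits, but only the
# larger aperture can exploit nights when the seeing approaches 1".
print(f'127mm: {refractor:.2f}"  CDK14: {cdk14:.2f}"')
```

So in ordinary seeing the two scopes are closer in delivered resolution than the apertures suggest, which is why the big scope only pulls ahead on good nights.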
Thanks for your thoughts on this Greg, as it helps clarify some ideas :-) The object shown in my example is CTB-1 (very faint), and no image, whether 36 hours with a CDK14 or 52 hours with a 30cm Newt, has anything like the 127mm detail, which I believe is largely generated by some kind of AI sharpening (could be wrong). The stars in the image, I believe (again, could be wrong), aren’t affected by the sharpening, as the sharpening is probably done on a starless image with the stars added back in later. However, even a CDK14 or 30cm Newt can be poor no matter how many hours, I guess. Also, to be clear, I’m not having a go at anyone, as the Facebook version on my iPhone (which is how some would consume these images) is pretty cool; the examples I used are from full-res screenshots on Astrobin, and the artifacts aren’t really visible once various compression algorithms have had their way. Also, there are many other images turning up that are very similarly sharpened, although the artifacts are easier to detect where a Hubble comparison is available.
It was more that sharp dark ‘pillar’ and the other ‘features’ in the 127mm image (the first one), surrounded by the smoothed-out background, that suggested some kind of selective AI texturing to me. The noise profile of the CDK image (the second one) seems more natural and doesn’t feel like sharpening has interpolated additional detail. In all the images I’ve seen, the ‘pillar’ or dark bump is never defined with a sharp edge, let alone that jagged top with the bright linear features next to it. Again, at lower resolution the 127mm pillar is a sharp ‘something’, but close up it looks like a jagged tower rising through the mist against a blood-red sky. And perhaps that’s the dilemma here: what I think is oversharpened and interpolated detail could be more dramatically suggestive, even if it isn’t the actual product of a star going supernova (which at a micro level, on a very faint object, could be quite regular and not very suggestive of anything). Not sure there is a Hubble version of this object to check against, which is why I thought a deep CDK14 version might suffice, although as you point out Greg, these bigger scopes do have their issues. Anyway, enough from me. Thinking about it too much when I should be trying to solve my own issues! Clear skies!
The CDK stars look much more natural to me than the stars in that 127mm image; those look very much like what I saw when I gave the AI sharpening routine a try. At a very low level of sharpening they lost the gradual fading that comes from normal seeing and became hard, flat blobs, which I found a very unattractive look. It also produced artifacts very quickly, and not just sharpening worms in the background areas: a few small patterns of stars suddenly looked like someone had dropped little X’s onto the image, not nice at all. I also don’t find the sharpening result on the nebulosity very natural looking. It would be interesting to do an integration with significantly more or less data and run the AI sharpen on both to see if it produces the same "detail" both times.
Similarly, the AI denoise just did not produce a natural-looking image with my data sets, so I uninstalled both in short order.
Becomes a little less like photography and a little more like painting. That image is not so much sharpened as enhanced with detail which may be assumed from other examples but may not really be present.
It is always the unexpected details which are more interesting than the expected assumptions.
I would actually take the statement a step further than that. It is not really even assumed detail or expected detail; I found that when the sharpening was pushed beyond a very small amount it produced detail that really does not exist. It is more "predicted" detail than anything else.
I tried the AI sharpen on a scanned picture of a car I had many years ago, and I have to say it failed horribly. It is an older car with a rubber seal around the windscreen rather than the bonded glass common since the early 80s. With any amount of "sharpening" at all, the paint on the A-pillar was extended onto the black rubber of the windscreen seal, and other body styling lines suddenly appeared (I am pretty sure I know what a 240Z is supposed to look like, and it ain't that!).
Interesting to read your results Paul. I don’t have or use Photoshop, so I have only suspected (with good reason!) that these ‘super details’ in some recent astro photos are artifacts added to images, drawn from whatever has trained the AI processing routine. Isn’t the end point then that you could take a picture of M8 (say) and the AI, trained on a Hubble M8, could just whack in details that might be correct but barely related to the data that has been captured? I could imagine it’d be easy to write some program that could scour the web for every pic of M8 and interpolate as required. No point to this, of course, but such a scenario would blur the line between plagiarizing and processing: you’d be plagiarizing every image rather than just one or two, perhaps?
I figure that as soon as you photograph anything, it's not how it 'really looks', so any technique to make it look better is 'fair'. Unless it's for scientific purposes, of course.
The object shown in my example is CTB-1 (very faint) and no image whether 36 hours with a CDK14 or 52 hours with a 30cm Newt has anything like the 127mm detail, which I believe is largely generated by some kind of AI sharpening (could be wrong).
That sharpened close-up is typical of Topaz Labs DeNoise AI and Sharpen AI. They create and add these stringy details. Dead giveaway.
I've noticed that on many planetary shots, including mine, when I used the Topaz AI suite. It's even harder to tell on a Jupiter shot because the clouds are so busy and dynamic.
But what gave it away is when I started doing animations.
The AI thingy would process each frame differently, because the pattern slightly changes as the planet rotates, and new details pop in and out of existence.
As a test, rotate the same deep-sky shot, say 30 degrees clockwise or anticlockwise, and apply the same filter to the same crop. I guarantee the result will be different.
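The test can't be scripted against Topaz itself, but the idea can be sketched with a stand-in filter. A 90-degree rotation (`np.rot90`) is used here instead of 30 degrees so interpolation error doesn't muddy the comparison; a plain unsharp mask then commutes with the rotation to within floating-point noise, which is exactly the property the AI sharpeners appear to lack:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = gaussian_filter(rng.random((128, 128)), 3)  # smooth synthetic "nebula"

def unsharp(im, sigma=2.0, amount=1.5):
    """Classic unsharp mask: add back the high-pass residual."""
    return im + amount * (im - gaussian_filter(im, sigma))

rotate_then_filter = unsharp(np.rot90(img))
filter_then_rotate = np.rot90(unsharp(img))

# For a rotation-equivariant filter this difference is floating-point noise;
# an AI sharpener run the same way would invent different "detail" each time.
print("max difference:", np.abs(rotate_then_filter - filter_then_rotate).max())
```

If a sharpener's output depends on the orientation of the crop, it is synthesising detail rather than recovering it.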
So AI could be good (barely) for enhancing terrestrial or people shots, but it doesn't in any way reflect reality. Even with the sliders right down at zero it will in some cases leave residual processing in your photos, depending on the image scale.
There are no such issues with the earlier Topaz Labs versions 1 or 2, where the sliders are purely manual and you can save a configuration as a preset. Because you have control over every aspect of noise reduction and sharpening, you can stop before it gets silly. Also, applying the same settings to different pictures, and/or to different orientations of the same picture, will yield similar and consistent results.
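The repeatability of a manual preset follows from its being plain deterministic arithmetic. A minimal sketch (the preset dict and the unsharp-mask stand-in are illustrative, not Topaz's actual parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A saved "preset" is just a fixed bundle of slider values.
PRESET = {"sigma": 1.5, "amount": 0.8}

def apply_preset(im, sigma, amount):
    """Deterministic unsharp mask driven entirely by the preset values."""
    return im + amount * (im - gaussian_filter(im, sigma))

rng = np.random.default_rng(1)
img = rng.random((64, 64))

out1 = apply_preset(img, **PRESET)
out2 = apply_preset(img, **PRESET)
# Same settings, same input: bit-identical output, with no hidden state.
assert np.array_equal(out1, out2)
```

No hidden model state, so the same slider values always mean the same operation, which is what makes presets trustworthy.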
In regard to ‘fairness’ (if you think of getting an APOD as a competitive process), I was a little surprised that what seems to be an AI-sharpened image did receive a NASA gong recently.
The overall image is still stunning I think (the blue OIII rim stands out so well against those rich reds), but letting what I think are regions of generated (possibly AI-generated?) detail fly seemed to me a little bit of an oversight. I would have thought some scientifically grounded processing would be necessary just to get past go. Anyway, I really know nothing about APODs and I’m probably being a geekish prude.
Isn’t the end point then that you could take a picture of M8 (say) and then the AI, trained on a Hubble M8, could just whack in details that might be correct but barley related to what data has been captured? I could imagine it’d be easy to write some program that could scour the web for every pic of M8 and interpolate as required. No point to this of course but such a scenario would blur the line between plagiarizing and processing: you’d be plagiarizing every image not one or two perhaps?
I don't think, at least in the current versions, that the AI is actually trained in that way (as in trained on the detail of a specific image or object); the training is more in terms of what sort of patterns are "real" and what are not.