IceInSpace > Equipment > Astrophotography and Imaging Equipment and Discussions
#21 | 23-10-2020, 04:49 PM | tim.anderson (Tim Anderson)
Quote:
Originally Posted by Peter Ward
What? and do a roll-over comparison as well? I'll pass.

That might not be prudent
Heh. Yes, probably.
#22 | 23-10-2020, 05:00 PM | gregbradley
Well, they must have improved their products, because I have found them to be very hit-or-miss, with little control available to the user.

Greg.
#23 | 23-10-2020, 05:33 PM | tim.anderson (Tim Anderson)
Quote:
Originally Posted by gregbradley
Well they must have improved their products because I have found them to be very hit or miss with little control available to the user.

Greg.
Same question. Can you provide an example?

Tim
#24 | 25-10-2020, 09:58 AM | Benjamin (Ben)
Another close up 127mm triplet image (an incredibly detailed 11 hours) compared to a CDK14 image (36 hours or so). The 127mm looks pretty cool on Facebook, but even then, how could you feel okay doing this to the detail in your data (assuming this is a result of excessive AI sharpening)? I guess to give an ‘impression’ of sharpness or detail at a certain scale? Is that a thing??! Maybe I’m just a bit too in the realist camp at present...
Attached thumbnails: 6323A865-A5FF-456F-BDA7-E6ECDC314F06.jpg, 70D0D798-5395-40D3-99B9-FDE4BDB44920.jpg
#25 | 25-10-2020, 03:34 PM | gregbradley
Quote:
Originally Posted by tim.anderson
Same question. Can you provide an example?

Tim
I don't have any saved images, just what I noticed while processing some past images that I then rejected.

Greg.

#26 | 25-10-2020, 03:36 PM | gregbradley
Quote:
Originally Posted by Benjamin
Another close up 127mm triplet image (an incredibly detailed 11 hours) compared to a CDK14 image (36 hours or so). The 127mm looks pretty cool on Facebook, but even then, how could you feel okay doing this to the detail in your data (assuming this is a result of excessive AI sharpening)? I guess to give an ‘impression’ of sharpness or detail at a certain scale? Is that a thing??! Maybe I’m just a bit too in the realist camp at present...
I don't see anything wrong with sharpening if it works. When it failed for me, it usually accentuated noise or wrecked the stars.

The CDK data must have been taken under terrible seeing, as there is no way a 127mm low-end refractor is going to compete with a properly collimated CDK14 in decent seeing.

Unless it was lots of short-sub lucky-imaging shots with a CMOS camera, versus long-exposure CCD.

Longer focal length has the Achilles heel of requiring better seeing.

Greg.
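Greg's point about focal length and seeing comes down to image scale: the longer scope samples the sky much more finely, so atmospheric blur eats into it first. A minimal sketch of the standard plate-scale arithmetic, with illustrative focal lengths and pixel size that are assumptions (not numbers from this thread):

```python
# Plate scale in arcseconds per pixel: 206.265 * pixel_size_um / focal_length_mm.
def plate_scale(pixel_um: float, focal_mm: float) -> float:
    return 206.265 * pixel_um / focal_mm

# Illustrative values: a 3.76 um pixel camera on a 127mm f/7.5 triplet
# (focal length ~952mm) versus a CDK14 (focal length ~2563mm).
refractor = plate_scale(3.76, 952.0)   # coarser sampling, more seeing-tolerant
cdk14 = plate_scale(3.76, 2563.0)      # finer sampling, needs better seeing
print(refractor, cdk14)
```

With these assumed numbers the refractor samples at roughly 0.8"/px and the CDK14 at roughly 0.3"/px, so 2" seeing smears far more CDK14 pixels per star.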
#27 | 25-10-2020, 06:27 PM | Benjamin (Ben)
Thanks for your thoughts on this, Greg; it helps clarify some ideas :-) The object shown in my example is CTB-1 (very faint), and no image, whether 36 hours with a CDK14 or 52 hours with a 30cm Newt, has anything like the 127mm detail, which I believe is largely generated by some kind of AI sharpening (I could be wrong). The stars in the image, I believe (again, I could be wrong), aren't affected, as the sharpening is probably done on a starless image with the stars added back in later. That said, even a CDK14 or 30cm Newt can be poor no matter how many hours, I guess.

Also, to be clear, I'm not having a go at anyone: the Facebook version on my iPhone (which is how some would consume these images) is pretty cool, and the examples I used are full-resolution screenshots from Astrobin, so the artifacts aren't really visible after the various compression algorithms. There are also many other images turning up that are very similarly sharpened, though the artifacts are easier to detect when a Hubble comparison is available.
#28 | 25-10-2020, 08:55 PM | gregbradley
That dotty grain in the 127mm image does remind me of oversharpening artifacts or excessive deconvolution artifacts.

Something is odd about the image. Perhaps it is as Peter suggested, and it's a bit of Hubble imagery pasted in. Who knows.

Greg.
#29 | 25-10-2020, 10:22 PM | Benjamin (Ben)
It was more that sharp dark 'pillar' and the other 'features' in the 127mm image (the first one), surrounded by the smoothed-out background, that suggest some kind of selective AI texturing to me. The noise profile of the CDK image (the second one) seems more natural and doesn't feel like sharpening is interpolating additional detail. In every image I've seen, the 'pillar', or dark bump, is never defined with a sharp edge, let alone that jagged top with the bright linear features next to it. Again, at lower resolution the 127mm pillar is a sharp 'something', but close up it looks like a jagged tower rising through the mist against a blood-red sky.

And perhaps that's the dilemma here: what I think is oversharpened and interpolated detail could be more dramatically suggestive, even if it isn't the actual product of a star going supernova (which at a micro level, on a very faint object, could be quite regular and not very suggestive of anything). I'm not sure there is a Hubble version of this object to check against, which is why I thought a deep CDK14 version might suffice, although, as you point out Greg, these bigger scopes do have their issues. Anyway, enough from me. I'm thinking about this too much when I should be solving my own issues! Clear skies!

#30 | 27-10-2020, 08:12 AM | The_bluester (Paul)
The CDK stars look much more natural to me than the stars in that 127mm image; those look very much like what I saw when I gave the AI sharpening routine a try. At a very low level of sharpening, the stars lost the gradual fading that comes from normal seeing and became hard, flat blobs, which I found a very unattractive look. It also produced artifacts very quickly, and not just sharpening worms in the background areas: a few small patterns of stars suddenly looked like someone had dropped text X's onto the image. Not nice at all. I also don't find the sharpening result on the nebulosity very natural-looking. It would be interesting to do an integration with significantly more or less data and run the AI sharpen on both, to see whether it produces the same "detail" both times.

Similarly, the AI denoise just did not produce a natural-looking image with my data sets, so I uninstalled both in short order.
#31 | 27-10-2020, 09:19 AM | Sunfish (Ray)
It becomes a little less like photography and a little more like painting. That image is not so much sharpened as enhanced with detail that may be assumed from other examples but may not really be present.

It is always the unexpected details which are more interesting than the expected assumptions.
#32 | 27-10-2020, 10:41 AM | The_bluester (Paul)
I would actually take that statement a step further. It is not really even assumed or expected detail: I found that when the sharpening was pushed beyond a very small amount, it produced detail that simply does not exist. It is more "predicted" detail than anything else.

I tried the AI sharpen on a scanned picture of a car I owned many years ago, and I have to say it failed horribly. It is an older car with a rubber windscreen seal rather than the bonded glass common since the early '80s. With any amount of "sharpening" at all, the paint on the A-pillar was extended onto the black rubber of the windscreen seal, and other body styling lines suddenly appeared (I am pretty sure I know what a 240Z is supposed to look like, and it ain't that!).
#33 | 27-10-2020, 12:53 PM | Benjamin (Ben)
Interesting to read your results, Paul. I don't have or use Photoshop, so I have only suspected (with good reason!) that these 'super details' in some recent astrophotos are artifacts added to images, drawn from whatever has trained the AI processing routine. Isn't the end point then that you could take a picture of M8 (say), and the AI, trained on a Hubble M8, could just whack in details that might be correct but barely related to the data that has been captured? I imagine it would be easy to write a program that could scour the web for every picture of M8 and interpolate as required. No point to it, of course, but such a scenario would blur the line between plagiarising and processing: you'd be plagiarising every image, not just one or two.
#34 | 27-10-2020, 01:26 PM | Stonius (Markus)
I figure that as soon as you photograph anything, it's not how it 'really looks', so any technique to make it look better is 'fair'. Unless it's for scientific purposes, of course.

Cheers,
Markus
#35 | 27-10-2020, 01:43 PM | multiweb (Marc)
Quote:
Originally Posted by Benjamin
The object shown in my example is CTB-1 (very faint) and no image whether 36 hours with a CDK14 or 52 hours with a 30cm Newt has anything like the 127mm detail, which I believe is largely generated by some kind of AI sharpening (could be wrong).
That sharpened close-up is typical of Topaz Labs DeNoise AI and Sharpen AI. It creates and adds these stringy details. Dead giveaway.

I've noticed it on many planetary shots, including mine, from when I used the Topaz AI suite. It's even harder to tell on a Jupiter shot because the clouds are so busy and dynamic. What gave it away was when I started doing animations: the AI thing would process each frame differently, because the pattern changes slightly as the planet rotates, and new details would pop in and out of existence.

As a test, rotate the same deep-sky shot, say 30 degrees clockwise or anticlockwise, and apply the same filter to the same crop. I guarantee the result will be different.

So AI could be good (barely) for enhancing terrestrial or people shots, but it doesn't in any way reflect reality. Even with the sliders right down to zero, it will in some cases leave residual processing in your photos, depending on the image scale.

There are no such issues with the previous Topaz Labs 2 or 1, where the sliders are purely manual and you can save a configuration as a preset. Because you have control over every aspect of noise reduction and sharpening, you can stop before it gets silly. Applying the same settings to different pictures, or to different orientations of the same picture, will also yield similar and consistent results.
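Marc's rotation test can be sketched in a few lines. Since the Topaz filters aren't scriptable here, a classical unsharp mask stands in for the AI step (an assumption, purely for illustration); the harness measures how differently the filter treats a rotated copy of the same data. A deterministic, isotropic filter commutes with rotation up to interpolation error, so its discrepancy stays tiny, whereas the claim is that an AI sharpener run the same way would not.

```python
import numpy as np
from scipy import ndimage

def unsharp_mask(img, sigma=2.0, amount=1.5):
    """Classical, deterministic sharpening: boost the residual from a blur."""
    return img + amount * (img - ndimage.gaussian_filter(img, sigma))

def rotation_consistency(img, filt, angle=30.0):
    """Mean |filt(rotate(img)) - rotate(filt(img))| over the central crop.
    The crop avoids the edge regions that the rotation fills in."""
    a = ndimage.rotate(filt(img), angle, reshape=False, mode='nearest')
    b = filt(ndimage.rotate(img, angle, reshape=False, mode='nearest'))
    c = slice(img.shape[0] // 4, 3 * img.shape[0] // 4)
    return float(np.mean(np.abs(a[c, c] - b[c, c])))

# Smooth synthetic "nebulosity" stands in for a real stack.
rng = np.random.default_rng(0)
img = ndimage.gaussian_filter(rng.random((256, 256)), 3.0)

# For a classical filter this stays near interpolation-noise level; per
# Marc's observation, an AI sharpener would score much higher, and the
# discrepancy map would show the invented detail moving around.
print(rotation_consistency(img, unsharp_mask))
```

The same harness works on a real FITS stack loaded into a NumPy array: run it once with the classical filter as a baseline, then with the AI tool's output substituted for `filt`.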
#36 | 27-10-2020, 02:18 PM | Benjamin (Ben)
In regard to 'fairness' (if you think of getting an APOD as a competitive process), I was a little surprised that what seems to be an AI-sharpened image received a NASA gong recently.

https://apod.nasa.gov/apod/image/201...issamAyoub.jpg

The overall image is still stunning, I think (the blue OIII rim stands out so well against those rich reds), but letting what I think are regions of generated (possibly AI-generated?) detail fly seemed to me a bit of an oversight. I would have thought somewhat scientifically grounded processing would be necessary just to get past go. Anyway, I really know nothing about APODs, and I'm probably being a geekish prude.
#37 | 27-10-2020, 02:43 PM | The_bluester (Paul)
Quote:
Originally Posted by Benjamin
Isn’t the end point then that you could take a picture of M8 (say) and then the AI, trained on a Hubble M8, could just whack in details that might be correct but barley related to what data has been captured? I could imagine it’d be easy to write some program that could scour the web for every pic of M8 and interpolate as required. No point to this of course but such a scenario would blur the line between plagiarizing and processing: you’d be plagiarizing every image not one or two perhaps?

I don't think, at least in the current versions, that the AI is actually trained in that way (as in, trained on the detail of a specific image or object); the training is more in terms of what sorts of patterns are "real" and what are not.
#38 | 27-10-2020, 02:50 PM | Benjamin (Ben)
Quote:
Originally Posted by The_bluester
I don't think at least in the current versions that the AI is actually trained in that way (As in trained on the detail of a specific image or object) the training is more in terms of what sort of patterns are "Real" and what are not.
Thanks for clarifying :-)
#39 | 27-10-2020, 03:16 PM | multiweb (Marc)
Quote:
Originally Posted by Benjamin
I was a little surprised that what seems to be an AI sharpened image did receive a NASA gong recently.
It's an AIPOD. They're launching a new trend.
#40 | 27-10-2020, 03:17 PM | Benjamin (Ben)
Quote:
Originally Posted by multiweb
It's an AIPOD. They're launching a new trend.
Tags: ai sharpen, apod, image processing