  #1  
Old 24-11-2020, 06:00 PM
multiweb (Marc)

AI and creative licence in astrophotography processing

Recently there have been a few discussion threads about "creative" processing of astro photos, mostly colourful RGB or narrowband (NB) nebulae.

I'll get one thing out of the way first and put it on the table so my position is crystal clear.

Quote:
To me astrophotography is all about the captured data and its integrity: trying to coax out the finest details that are within range of the instrument used. Great data requires very little processing.

And when I look at an astrophoto I want to be assured that what I am looking at is real, and to find the same level of detail in my own photo, or better, so I can use it as a reference to compare against and improve next time I get out.

That's my motivation and to me that's the whole point of the hobby. Trying to get the next shot that little bit sharper. That little bit deeper.

As such it is very important to me that the processing is done in a way that obeys certain rules. We all know what they are and try to follow them as best we can: respect the light, have a good colour balance or NB palette, don't over-sharpen, and keep the masking and colour processing to a minimum.

So I have absolutely no interest in framing, the rule of thirds, complicated multi-colour blends, or conveying an emotion or telling a story. The objects we image are already beautiful "au naturel", have specific colours and structures for a reason (which is called science), and don't need layers of make-up or a bio to go with them.

All of the above I classify as astro art. It's not astrophotography.
Topaz launched their new AI suite this year, which I jumped on, as many others did I suspect, and started to use on my astro shots. I had used Topaz 1 and 2 in the past and they were good products, so I didn't think twice.

At first I was surprised at the lack of settings: it was pretty much left to do its own thing. Some of the results seemed pretty good, others terrible. Then I realised the good results (on my planetary shots) were actually too good to be true.

I then came across this video of Damian Peach's take on AI processing, which I found right here on IIS. Look at timestamp 9:20:

https://player.vimeo.com/video/451216105

That was pretty telling, and matched what I saw in my IR Jupiter shots as well. So Damian Peach's recommendation was not to use AI software for planetary processing.

I wanted to know why, so I asked Ivo, who's pretty heavily into all this stuff, and he "educated" me, well, to a level where I got the gist of it, because this stuff is complicated even at the level of the high-level concepts. The maths must be horrendous.

He gave me a link to the Topaz website, where they themselves say that it makes up details, which was a contentious point in another thread.

https://topazlabs.com/tag/sharpen-ai/

Here's the interesting extract:

http://www.astropic.net/astro/ivo/Topaz_Sharpen_AI.jpg

"It synthesizes convincing details"
Well... that's where the buck stops for me, sorry. You can be creative all you want until you turn blue in the face, but that's just not astrophotography anymore.

So now for the tech part. Big thanks to Ivo; I'm copy-pasting the links, videos and everything you wanted to know about AI and why it has problems dealing with astro photos.

/**************************************************/
Those who wish to learn more about convolutional neural nets, their strengths and their weaknesses, do have a look here and at the video below:

[embedded video]

Neural nets of this type are trained by giving them an example input (for example, a blurry image) and an example output (the sharpened equivalent).

Learning works by letting the neural net attempt to predict the output based on the input. It does so by forward-propagating the image (or some derivative thereof) through interconnected layers of neurons. At the last layer (aka the output layer), the error between what was produced and what was desired is measured, and the neurons are "tweaked" (their weights adjusted) through backpropagation in response to the error. This makes the neural net do a little bit(!) better next time it encounters this specific input image or something like it.
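
To make that training loop concrete, here is a minimal sketch in Python/PyTorch. It is a toy stand-in of my own, NOT Topaz's actual code or architecture; the net, the data and the settings are all made up for illustration.

Code:
# Toy sketch of the training loop described above (assumed stand-in,
# not Topaz's code): a small convolutional net learns to map a blurry
# input image to its sharp counterpart.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

blurry = torch.rand(8, 1, 64, 64)  # example inputs (random stand-in data)
sharp = torch.rand(8, 1, 64, 64)   # example desired outputs

for step in range(100):
    prediction = net(blurry)           # forward propagation
    loss = loss_fn(prediction, sharp)  # error measured at the output layer
    optimiser.zero_grad()
    loss.backward()                    # backpropagation of the error
    optimiser.step()                   # weights "tweaked" a little bit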

A long-standing problem with neural networks is that they are usually pretty poor at extrapolating and dealing with situations they "have not seen before". In statistical analysis, this is called "overfitting":

[embedded illustration]

E.g. if all you have shown your neural net is people, trees, buildings, trains and ants, then every pixel will be regarded as being part of a person, tree, building, train or ant. Don't be surprised that if you give it a car, or a cat, it will try to turn the headlights of the car into carriage buffers, or turn poor Mittens' fluffy chops into a set of mandibles. This "turning into" is a sliding scale; it's not on/off, it is "fuzzy". Mittens won't be endowed with distinct mandibles, but if the net is used for, say, sharpening, it will "sharpen" (synthesise) Mittens' fur in such a way as to bring out the teeth it - erroneously - expects.
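
You can demonstrate this poor extrapolation with a toy example (my own illustration, not from Ivo's material): fit a high-order polynomial to a handful of points, then evaluate it outside the range it was fitted on. On the training points it is near perfect; beyond them it is confident nonsense.

Code:
# Toy illustration of poor extrapolation: a model fitted to a narrow
# range of inputs produces confident nonsense outside that range.
import numpy as np

x_train = np.linspace(0, 1, 6)                # all the model has "seen"
y_train = np.sin(2 * np.pi * x_train)         # toy ground truth
coeffs = np.polyfit(x_train, y_train, deg=5)  # fits the training points almost exactly

x_new = np.array([1.5, 2.0])                  # inputs unlike the training data
print(np.polyval(coeffs, x_new))              # wildly wrong predictions
print(np.sin(2 * np.pi * x_new))              # true values are ~0 at both points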

So it is when you take a neural net trained on terrestrial images, under a select set of circumstances, and apply it to an AP scene.

There are actually a number of complicating issues when applying neural nets trained on terrestrial scenes to AP, causing severe artefacts beyond what is already a synthetic reality:

#1. Terrestrial scenes are non-linearly stretched in a predictable way, whereas AP scenes are not. AP scenes necessitate extremely different dynamic range allocations, because things are either ridiculously bright or ridiculously faint with little in between (see the sketch after this list). This stretch affects everything from noise signatures to point spread functions.

#2. Noise signatures are vastly different in (finished) AP images. This is - again - due to different global stretches, but also local stretches.

#3. Point spread functions vary greatly in AP due to vastly different optical trains; central obstructions, focusers and vanes yield vastly different diffraction patterns. The appearance of the diffraction patterns is exacerbated by different stretches, both global and local.

#4. Structures and patterns we find in terrestrial scenes don't exist in outer space, where straight edges and geometrical shapes are exceedingly rare. Conversely, many terrestrial scenes have precisely such distinct features, from horizons to haircuts to architecture. We love our straight edges and geometrical patterns here on Earth.
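
As a quick sketch of #1 and #2 (my own toy numbers, not from Ivo's material), here is the same linear signal under a mild terrestrial-style gamma curve versus the kind of aggressive asinh stretch commonly used in AP. The faint end gets pushed up far harder in the AP case, which also reshapes the noise the net thinks it knows.

Code:
# Toy comparison of a predictable terrestrial stretch (sRGB-like gamma)
# with an aggressive AP-style asinh stretch. All values are made up.
import numpy as np

linear = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])  # faint nebula ... bright star

gamma = linear ** (1 / 2.2)                           # typical terrestrial curve
asinh = np.arcsinh(linear * 1000) / np.arcsinh(1000)  # typical AP-style curve

print(np.round(gamma, 3))  # ~[0.015 0.043 0.123 0.351 1.   ]
print(np.round(asinh, 3))  # ~[0.013 0.116 0.394 0.697 1.   ]
# The asinh curve lifts the faint-to-mid signal far more aggressively,
# so noise learned on gamma-stretched images no longer looks the same.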

In other words, "overfitting" is a huge problem for the Topaz suite when it comes to AP, and its results have even less grounding in reality.

You can have a look at the video below to see other applications that exploit similar training on image datasets for other synthesis purposes (sometimes failing in hilarious or terrifying ways).

[embedded video]

/**************************************************/

  #2  
Old 24-11-2020, 07:58 PM
Dennis

Thanks Marc, interesting stuff and a good read.

Cheers

Dennis

  #3  
Old 24-11-2020, 09:16 PM
Max Vondel (Peter)

A big post.
For me what is real is what my brain perceives.
In reality you only have red, green, blue and light-sensitive receptors.
Any other colours are artefacts of your brain's processing.
So who is to say what is real or not?

I come from a nuclear medicine background. Here we try to extract maximum detail from a limited number of gamma-ray photons, say 250K to a million counts within a narrow energy window. The images are then processed with an assortment of colour palettes to bring out the details. A 140 keV photon of course has no colour, but adding colour certainly helps in diagnosis. There is also an inherent lack of resolution, as it is impossible to focus a gamma ray in medical imaging.
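
For readers unfamiliar with this, here is a trivial sketch of what false-colour display means (my own toy illustration, not an actual nuclear medicine workflow): the counts are the data, and the palette is purely a display choice.

Code:
# False-colour display of a monochrome count image: the palette is a
# display choice, not a property of the photons. Toy data only.
import numpy as np
import matplotlib.pyplot as plt

counts = np.random.poisson(lam=50, size=(128, 128))  # stand-in gamma-count image

plt.imshow(counts, cmap="inferno")  # any palette works; swap for "viridis", etc.
plt.colorbar(label="counts")
plt.show()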

So I find that you cannot apply all the same rules as you would for normal daylight photos/perception to night-time viewing or imaging.
Again I would apply the principle of extracting maximum information from a limited data set.

To me there is no holy grail about what is right and how it should be perceived/presented.
The illusion of perfection is largely in our minds.

  #4  
Old 24-11-2020, 09:26 PM
peter_4059 (Peter)

Top stuff Marc. Interesting reading.

  #5  
Old 24-11-2020, 10:08 PM
rustigsmed (Russell)

Nice post Marc.

Yes, Topaz AI is an interesting one; results vary wildly on deep sky and planetary, and I have dabbled in some experimental work with it. If one chooses to use it because you are only after a pretty picture, I think you still need to scrutinise before and after to make sure it is realistic. Once you have used it a few times you can spot Topaz very easily, especially when liberally applied.

And thank you especially for posting the D Peach video - I think my planetary processing will now take 15% of the time it used to ...

cheers

  #6  
Old 25-11-2020, 02:14 PM
multiweb (Marc)

Quote:
Originally Posted by Max Vondel View Post
A big post.
For me what is real is what my brain perceives.
In reality you only have red, green, blue and light-sensitive receptors.
Any other colours are artefacts of your brain's processing.
So who is to say what is real or not?
G'day Peter. I agree that in NB (or palettes other than RGB true colour) you can pretty much use the palette of your choice. I did that when I mapped IR to blue or red in bicolour imaging blended with a monochrome LUM. As long as you respect the ratios and are forthcoming about which colour you map to each channel to emphasise the difference in composition, that's fine. But you don't then start masking areas, or post-processing and replacing R, G or B colours based on taste or to emulate something else, as you modify not only the ratios but also the visible structures and perception. A toy sketch of what I mean by a global, declared mapping is below.
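
This is a simplified illustration of such a global channel mapping, not an actual processing workflow; the frames are random stand-ins.

Code:
# Simplified global bicolour mapping: whole mono masters are assigned
# to colour channels, with no per-area masks, so the channel ratios
# survive everywhere. Stand-in data, not a real workflow.
import numpy as np

ir = np.random.rand(256, 256)   # stand-in IR master frame
lum = np.random.rand(256, 256)  # stand-in luminance master frame

rgb = np.zeros((256, 256, 3))
rgb[..., 0] = lum               # declared mapping: LUM -> red
rgb[..., 2] = ir                # declared mapping: IR -> blue
# Because the mapping is global, the IR/LUM ratio at any pixel is
# exactly what was captured; nothing is selectively repainted.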

Quote:
Originally Posted by Max Vondel View Post
So I find that you cannot apply all the same rules as you would for normal daylight photos/perception to night-time viewing or imaging.
That's a valid point and one flaw that Ivo pointed out with the AI training.

Quote:
Originally Posted by Max Vondel View Post
Again I would apply the principle of extracting maximum information from a limited data set.
Absolutely. If the selected palette works in a way that accentuates the structures and boundaries in the real data, then it's perfectly fine to choose a mapping that works to that end, without any localised editing.

Quote:
Originally Posted by Max Vondel View Post
To me there is no holy grail about what is right and how it should be perceived/presented.
The illusion of perfection is largely in our minds.
DSOs don't change much over a lifetime, so we're all imaging the same things. Depending on the instrument, we should be pretty close in what we're presenting. Not perfection, but basic consistency in the data presented, without too much creative thinking.

  #7  
Old 25-11-2020, 04:06 PM
Peter Ward

Bravo Marc. Well put (though you may be preaching to the choir in my case).

The simple act of revealing the often stunning show that nature provides is often not so simple.

Distant and ancient photons often need heroic imaging sessions, excellent data, and subtle teasing out of the details they contain.

The GIGO (garbage-in-garbage-out) principle strongly applies.

Without excellent raw data what follows is doomed to mediocrity.

Stars eggy? There's a filter for that!

Arbitrary application of clown-like make-up via Photoshop (or whatever) might make an interesting image. But to coin a well-used phrase, a pig with lipstick is still a pig.

Many astrophotographers who have discovered a new PS plug-in remind me of freshmen who have discovered beer (and often a little too much beer).

My guess is it takes a while before your palate can appreciate a good red or single malt.....

  #8  
Old 25-11-2020, 04:28 PM
TrevorW

I have to agree; an over-processed bright pink M42 just doesn't cut it in my eyes.

  #9  
Old 25-11-2020, 04:34 PM
Benjamin (Ben)

Great post Marc. I feel much the same way, although to my mind a photo is always a mix of both science and art, and there is also an artistic side to science. This is NOT to lessen the value and rigour of science, but rather to think more seriously about the value and rigour of art! In regard to AP, I'd much rather marvel at a slightly indistinct detail (perhaps imagining what it might be) than at what an AI sharpening routine thinks is there.

  #10  
Old 25-11-2020, 04:41 PM
multiweb (Marc)

Quote:
Originally Posted by Peter Ward View Post
Many astrophotographers who have discovered a new PS plug-in remind me of freshmen who have discovered beer (and often a little too much beer).
I think the crowd of landscape photographers who came to astrophotography, because of the improvement in DSLR sensitivity and the fact they could start shooting the Milky Way or in low light, then started to bring their creativity into the mix without really understanding what astrophotography is about. This is a clear example of what I call astro art:

https://catmachin.com/


A bit extreme for sure, but it's the idea that astrophotography can be played with and passed off as real astrophotography in more subtle ways that is the worry. There is a clear distinction that must be made between the two, IMHO.

  #11  
Old 25-11-2020, 04:43 PM
multiweb (Marc)

Quote:
Originally Posted by Benjamin View Post
than at what an AI sharpening routine thinks is there.
Or at a person's interpretation that strays too far from reality in order to convey a feeling or something else.

  #12  
Old 25-11-2020, 04:45 PM
TrevorW

In some ways the science has gone out of the picture, so to speak.

  #13  
Old 25-11-2020, 07:43 PM
strongmanmike (Michael)

It doesn't help the growing trend you are concerned about, Marc, when major astroimaging comps give awards to some of the made-up, abstractly manipulated palaver that is suddenly interpreted as "striking" because it supposedly tells a long-bow story or evokes a connection or memory... I mean, really? So why not AI? What's the issue? As long as it "looks" sharp and amazing and gives the right people goosebumps, it could be a winner!... and that's the game for so many.

Mike

  #14  
Old 25-11-2020, 08:29 PM
TrevorW

Quote:
Originally Posted by strongmanmike View Post
It doesn't help the growing trend you are concerned about, Marc, when major astroimaging comps give awards to some of the made-up, abstractly manipulated palaver that is suddenly interpreted as "striking" because it supposedly tells a long-bow story or evokes a connection or memory... I mean, really?

You wouldn't be referring to one of those endless images of the Milky Way as a backdrop to a plethora of varying landscapes that some people consider to be astrophotography, would you?

  #15  
Old 25-11-2020, 08:30 PM
multiweb (Marc)

I guess if you want to win, and that's the trend or what's expected, then AI would be the least of one's concerns, but that sounds like doping at the TdF. Win at all costs? I like astrophotography because you're looking deeper at something you could look at through the eyepiece, and you get to take the photo home as a record to study further. That's how it felt the first time I plonked a camera on the scope, not knowing what I'd get. The feeling hasn't changed. I'm not interested in competitions. I do this for personal enjoyment. No pressure. No expectations. No insecurity. Just chilling under the stars.

  #16  
Old 25-11-2020, 08:49 PM
Benjamin (Ben)

The topic has me thinking WAY too much, BUT astrophotography would seem to me to be the zone where art and science meet (rather than an either/or space). The problem for me is that the word art seems to imply some kind of freedom to do whatever you like to make a pretty picture, or that certain framing or other 'artistic' rules need to apply. It certainly can mean this, and very often does, but did Picasso, Rembrandt etc. set out to make 'pretty pictures'? Rather, I would suggest they wanted the viewer of their art to see the world and themselves in a different way: to challenge the viewer to see beyond themselves to the world and time they inhabit.

Imaging things in the sky CAN share similar goals to this concept of art, asking us to wonder at things beyond our usual frames of reference. Sure, it's a long bow to draw saying that my ****ty image of M45 (or whatever) is comparable to 'Guernica' or 'The Night Watch', but I think the intentions behind both share something very human and altruistic. Going deeper, and getting better gear or acquisition techniques, are all (for me at least) about seeing more of what's out there, and it's crucial, if we are to experience this as a challenge to a closed narcissistic world view, that that extra detail is as 'real' as possible! The artistic aim is in sync with science in this way. Fake detail is a lie to oneself as an 'artist' (you've achieved nothing in getting over yourself!) and a betrayal of the viewer (we know it's fake!). It's also clearly fake science, and we feel equally let down.

From another perspective, eggy stars, an abundance of noise, a lack of clarity, excessively clipped images and rings around stars all alert us to the act of photography and processing, rather than to the striking 'reality' of what is being imaged. We don't need to be Picassos or Rembrandts; just by 'respecting the light' we can share in an equally scientific AND artistic pursuit. Chilling under the stars is something of an 'artistic' experience too, perhaps?

Or you can just put some cool pics on Facebook and leave it at that :-)

  #17  
Old 25-11-2020, 08:51 PM
strongmanmike (Michael)

Quote:
Originally Posted by multiweb View Post
I guess if you want to win, and that's the trend or what's expected, then AI would be the least of one's concerns, but that sounds like doping at the TdF. Win at all costs? I like astrophotography because you're looking deeper at something you could look at through the eyepiece, and you get to take the photo home as a record to study further. That's how it felt the first time I plonked a camera on the scope, not knowing what I'd get. The feeling hasn't changed. I'm not interested in competitions. I do this for personal enjoyment. No pressure. No expectations. No insecurity. Just chilling under the stars.
I get it and agree: detail and resolution have to be real, not perceived (or flat-out bogus), or, yes, what's the point?

Actually, I am continually surprised to see what some people clearly "think" is real detail. It is like they either just have no idea, or have simply convinced themselves, intentionally or unintentionally, that what they have come up with is real when it is plainly not; perhaps from staring at their screens for so long, it all gets lost in unreality. And I have noticed this is not limited to the novice imager either!

There is a limit to what processing should do to your data, for sure, and like most limits it is different for everybody, because everyone's reality is slightly different, based on perceptions, interpretations, backgrounds, previous habits and experiences.

I was just pointing to what (mostly) non-astroimagers seem to like the look of, and that's not necessarily real detail in an astronomical object, because they don't really know what that is, so the "look" is the deal clincher... sad but true.

Over the years I have often done comparisons with Hubble or other large pro telescopes (which I know has annoyed a small number of people), for exactly this reason: to validate and check that exactly what I have recorded is in fact real.

Mike

  #18  
Old 25-11-2020, 09:06 PM
Benjamin (Ben)

I was noticing that one particular astrophotographer doing these AI-sharpened images was focusing on objects that don't seem to have a Hubble reference, or at least none I could find. A bit conspiracy-theory-ish on my part, but it's a way of avoiding a very valid form of critique.

  #19  
Old 26-11-2020, 09:42 AM
multiweb (Marc)

Quote:
Originally Posted by Peter Ward View Post
My guess is it takes a while before your palate can appreciate a good red or single malt.....
I re-read your post, Peter, and it got me thinking. It is an excellent analogy; you've hit the nail on the head. People who feel the need to selectively process areas of an image, or enhance colours in macro structures, are not looking at the same things a more experienced imager would, and that stems from a lack of knowledge about what they are imaging.

Mike Berthon-Jones said it loud and clear multiple times: in the same field, some things are supposed to look sharp and clearly defined while others are meant to look wispy and fuzzy. There is a scientific reason behind it. So you can't arbitrarily enhance everything the same way just because you assume parts of your nebulosity are too soft. That knowledge comes with time and experience.

When you do away with selective processing by masking and apply the same settings to your whole field, you very quickly realise where the sharp features pop up naturally. Another downside of selective processing is that you miss other areas of your field, and then it becomes very obvious that you brought up details in only one part of it.

An example would be a super-sharp galaxy with dust lanes while the stars and a smaller galaxy off-centre are blurry; or, in a nebula like NGC 3372, a sharp Keyhole and ridge with everything else equally sharp or blurry, when there is a great variety of other fronts in the same field if you know where to look.

  #20  
Old 26-11-2020, 02:35 PM
The_bluester (Paul)

It is a very interesting field of discussion (processing techniques in general, not specifically AI processing).


Where do you draw the line? In my own case, having shot M42 yet again recently so I could compare the old scope and camera with the new, you could argue that I should have chosen between revealing the much fainter dusty stuff spread around the field or revealing the core of M42, rather than using layer masks to bring through a less-stretched core (a rough sketch of that kind of blend is below). I tried for a light touch: just enough of the fainter stuff revealed while still keeping the Trapezium stars visible, unlike other images (which are often still interesting anyway) where the effect is to flatten the brightness range so far that the dust is still visible in the background but the core area looks more or less like it does visually at the EP.
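
For what it's worth, here is a rough numpy sketch of that kind of masked blend; a toy version only, not the actual Photoshop layer workflow, and the data and thresholds are made up.

Code:
# Toy version of a masked HDR-style blend: a hard stretch shows the
# faint dust, a gentle stretch keeps the core, and a mask mixes them.
import numpy as np

linear = np.random.rand(512, 512)                   # stand-in linear data
deep = np.arcsinh(linear * 500) / np.arcsinh(500)   # hard stretch: faint dust visible
core = np.arcsinh(linear * 20) / np.arcsinh(20)     # gentle stretch: core not blown out

mask = np.clip((deep - 0.8) / 0.2, 0.0, 1.0)        # ~1 where the hard stretch saturates
blended = mask * core + (1 - mask) * deep           # less-stretched data in the bright core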


As regards AI sharpening and de-noise, I downloaded the trials and had a play, and I could never find an application of either that pleased me, so I deleted both. I don't have a problem with the concept of AI de-noise as such, but it certainly needs to be applied with a feather, not a firehose. Others seem to get a smoother and more subtle result out of the AI de-noise, but it didn't work nicely with my data. I will leave it to them to use or not.


I tried the AI sharpen filter and immediately saw artefacts I had seen, and disliked, in other images; it was a bit of an "Ahh, that is where that is coming from" moment for me.

I will put my hand up to using some PS plugins, though. The ones I use don't do anything I couldn't do manually myself (I am happy to pay someone for having worked them out), and I apply them pretty sparingly; I often reduce the opacity of the resulting layer to reduce their impact.


I don't have any trouble with the concept of astro art either. The images linked above in this thread I wouldn't go putting on my wall, but I have seen plenty of NB images that I would.