24-11-2020, 07:00 PM
multiweb (Marc)
AI and creative licence in astrophotography processing

Recently there have been a few discussion threads about "creative" processing of astro photos, mostly colourful RGB or NB nebulae.

I'll get one thing out of the way first and put it on the table so my position is crystal clear.

Quote:
To me astrophotography is all about the captured data and its integrity. Trying to coax out the finest details that are within range of the instrument used. Great data requires very little processing.

And when I look at an astrophoto I want to be assured that what I am looking at is real, and to find the same level of detail in my photo or better, so I can use it as a reference to compare against and improve the next time I get out.

That's my motivation and to me that's the whole point of the hobby. Trying to get the next shot that little bit sharper. That little bit deeper.

As such it is very important to me that the processing is done in a way that obeys certain rules. We all know what they are and try to follow them as best we can. Respect the light, have a good colour balance or NB palette, don't over-sharpen, and keep the masking and colour processing to a minimum.

So I have absolutely no interest in framing, the rule of thirds, complicated multi-colour blends, or conveying an emotion or telling a story. The objects we image are already beautiful "au naturel", have specific colours and structures for a reason, which is called science, and don't need layers or make-up or a bio to go with them.

All of the above I classify as astro art. It's not astrophotography.
Topaz launched their new AI suite this year, which I jumped on as many others did, I suspect, and started to use on my astro shots. I had used Topaz 1 and 2 in the past and they were good products, so I didn't think twice.

I was at first surprised at the lack of settings; it was pretty much left to do its thing. Some of the results seemed pretty good, others terrible. Then I realised the good results (on my planetary shots) were actually too good to be true.

I then came across this video of Damian Peach's take on AI processing, which I found right here on IIS. Look at timestamp 9:20.

https://player.vimeo.com/video/451216105

That was pretty telling, and it matched what I saw in my IR Jupiter shots as well. So Damian Peach's recommendation was not to use AI software for planetary processing.

I wanted to know why, so I asked Ivo, who's pretty heavy into all this stuff, and he "educated" me, at least to a level where I got the gist of it, because even the high-level concepts of this stuff are complicated. The maths must be horrendous.

He gave me a link to the Topaz website, and they themselves say that it makes up details, which was a contentious point in another thread.

https://topazlabs.com/tag/sharpen-ai/

Here's the interesting extract:

http://www.astropic.net/astro/ivo/Topaz_Sharpen_AI.jpg

"It synthesizes convincing details"
Well... that's where the buck stops for me, sorry. You can be creative all you want until you turn blue in the face, but that's just not astrophotography anymore.

So now for the tech part, and big thanks to Ivo. I'm copying and pasting the links, videos, and everything you wanted to know about AI and why it has problems dealing with astro photos.

/*********************************** *******************/
Those who wish to learn more about convolutional neural nets, their strengths, and their weaknesses, do have a look here and at the video below:



Neural nets of this type are trained by giving them an example input (for example, a blurry image) and an example output (the sharpened equivalent).

Learning works by letting the neural net attempt to make a prediction of the output based on the input. It does so by forward propagating the image (or some derivative thereof) through interconnected layers of neurons. At the last layer (aka the output layer), the error between what was produced and what was desired is measured, and the neurons are "tweaked" (their weights adjusted) through backpropagation in response to the error. This makes the neural net do a little bit(!) better next time it encounters this specific input image or something like it.
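To make that loop concrete, here's a minimal sketch of one such training step (Python/PyTorch, with a made-up toy model and dummy data; it only illustrates the forward pass / error / backpropagation cycle described above, and is nothing like what Topaz actually ships):

```python
# Minimal sketch of one training step on a (blurry, sharp) example pair.
# The tiny "sharpening" net below is purely illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def training_step(blurry, sharp):
    """One example-pair update: forward propagate, measure the error, backpropagate."""
    prediction = model(blurry)          # forward pass through the layers of neurons
    loss = loss_fn(prediction, sharp)   # error between what was produced and what was desired
    optimiser.zero_grad()
    loss.backward()                     # backpropagation of the error
    optimiser.step()                    # weights "tweaked" a little bit
    return loss.item()

# Dummy stand-ins for a real blurry/sharp training pair.
blurry = torch.rand(1, 1, 64, 64)
sharp = torch.rand(1, 1, 64, 64)
print(training_step(blurry, sharp))
```

Repeat that over millions of example pairs and the net gets very good at reproducing the kind of "sharp" it was shown, and only that kind.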

A long-standing problem with neural networks is that they are usually pretty poor at extrapolating and dealing with situations they "have not seen before". In statistical analysis, this is called "overfitting":



E.g. if all you have shown your neural net is people, trees, buildings, trains and ants, then every pixel will be regarded as being part of a person, tree, building, train or ant. Don't be surprised that if you give it a car, or a cat, it will try to turn the headlights of the car into carriage buffers, or turn poor Mittens' fluffy chops into a set of mandibles. This "turning into" is a sliding scale; it's not on/off, it is "fuzzy". Mittens won't be endowed with distinct mandibles, but if the net is used for, say, sharpening, it will "sharpen" (synthesize) Mittens' fur in such a way as to bring out the teeth it - erroneously - expects.
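A toy illustration of that "nearest learned category" effect (plain Python/numpy, with invented numbers): a classifier's softmax output always distributes 100% of its confidence across the classes it was trained on, so something it has never seen still has to come out as one of them, to some fuzzy degree.

```python
# Toy illustration: a net trained only on these classes has no way to say "car".
import numpy as np

classes = ["person", "tree", "building", "train", "ant"]

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical raw scores the net might produce for a photo of a car:
# there is no "car" neuron, so whatever looks vaguely train-like wins.
logits_for_car = np.array([0.3, -1.2, 1.1, 2.4, -0.5])
probs = softmax(logits_for_car)
for name, p in zip(classes, probs):
    print(f"{name}: {p:.2f}")   # "train" gets the biggest share; "car" is not an option
```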

So too when you take a neural net, trained on terrestrial images under a select set of circumstances, and apply it to an AP scene.

There are actually a number of complicating issues when applying neural nets trained on terrestrial scenes to AP, causing severe artefacts beyond what is already a synthetic reality:

#1. Terrestrial scenes are non-linearly stretched in a predictable way, whereas AP scenes are not. AP scenes necessitate extremely different dynamic range allocations because things are either ridiculously bright or ridiculously faint, with little in between. This stretch affects everything from noise signatures to point spread functions (see the sketch after this list).

#2. Noise signatures are vastly different in (finished) AP images. This is - again - due to different global stretches, but also local stretches.

#3. Point spread functions vary greatly in AP due to vastly different optical trains; central obstructions, focusers and vanes yield vastly different diffraction patterns. The appearance of the diffraction patterns is exacerbated by the different stretches, both global and local.

#4. Structures and patterns we find in terrestrial scenes here on Earth don't exist in outer space. Straight edges and geometrical shapes are exceedingly rare out there. Conversely, many terrestrial scenes have precisely such distinct features, from horizons to haircuts to architecture. We love our straight edges and geometrical patterns here on Earth.
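Here is the sketch referred to in point #1 (Python/numpy; the stretch parameters are arbitrary examples, not anyone's actual workflow). It compares a mild, predictable gamma-style tone curve of the kind terrestrial photos get with the kind of aggressive arcsinh stretch AP data needs, to show how differently the two allocate dynamic range to faint signal:

```python
# Rough comparison of a terrestrial-style tone curve vs an AP-style stretch.
import numpy as np

linear = np.array([0.0, 0.001, 0.01, 0.05, 0.1, 0.3, 1.0])  # linear sensor values

gamma = linear ** (1 / 2.2)        # mild, predictable non-linear stretch (sRGB-like)

def asinh_stretch(x, a=0.002):
    # aggressive stretch of the kind used to lift ridiculously faint nebulosity
    return np.arcsinh(x / a) / np.arcsinh(1 / a)

astro = asinh_stretch(linear)

for x, g, s in zip(linear, gamma, astro):
    print(f"linear {x:.3f} -> gamma {g:.2f}, asinh {s:.2f}")
# A pixel at 1% of full scale lands around 0.12 after the gamma curve but around
# 0.33 after the asinh stretch: the same faint signal, noise and PSF wings end up
# in a completely different part of the histogram.
```

So the noise signatures and star profiles a net "learned" from terrestrially stretched photos simply do not look the same in a finished AP image, and its guesses go astray accordingly.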

In other words, "overfitting" is a huge problem for the Topaz suite when it comes to AP, and its results have even less grounding in reality.

You can have a look at the video below to see other applications that exploit similar training on image datasets for other synthesis purposes (and sometimes fail in hilarious or terrifying ways).



/*********************************** *******************/