
View Full Version here: : Strange Bedfellows


Andy01
13-11-2020, 04:34 PM
I thought I'd feature these quirky little chaps in the LMC - rarely if ever imaged as a quartet, they lurk surreptitiously in the shadows below the famous Tarantula Nebula. :)

Colourful supernova remnant N206 contains a cluster of massive young stars (NGC 2018) and the unique red circular pulsar wind nebula (or SNR).
N206 resides next to its companions, N204, a Wolf-Rayet star & co.

These fascinating structures in the Large Magellanic Cloud (LMC) contain a large HII emission region referred to as LHA 120-N 206 (N206) and a supernova remnant (SNR B0532-71.0), as described by Williams et al. (2005). The SNR has an estimated age of 23,000-27,000 years. The main emission region is about 13 arcmin long. The SNR (also often referred to as N206) is the circular structure at the far right of the main nebula.

The LMC is a small satellite galaxy gravitationally bound to our own Milky Way. Yet the gravitational effects are tearing the companion to shreds in a long-playing drama of 'intergalactic cannibalism.' These disruptions lead to a recurring cycle of star birth and star death. Astronomers are particularly interested in the LMC because its fractional content of heavy elements is two to five times lower than is seen in our solar neighbourhood. In this context, 'heavy elements' refers to those elements not present in the primordial universe. Elements such as carbon, oxygen and others are produced by nucleosynthesis and are ejected into the interstellar medium via mass loss by stars, including supernova explosions. As such, the LMC provides a nearby cosmic laboratory that may resemble the distant universe in its chemical composition. :D

Topaz NR was used subtly to enhance some structural details. (Incredible AI software!)

Fun Fact: :D This emission nebula was cataloged by Karl Henize (HEN-eyes) while spending 1948-1951 in South Africa doing research for his Ph.D. dissertation at the University of Michigan. Henize later became a NASA astronaut and, at age 59, became the oldest rookie to fly on the Space Shuttle during an eight-day flight of the Challenger in 1985. He died just short of his 67th birthday in 1993 while attempting to climb the north face of Mount Everest, the world's highest peak. :eyepop:

Full frame Version Here (https://www.astrobin.com/full/9ex38m/0/)

Crop featuring N206 Here (https://www.astrobin.com/full/9ex38m/B/)

AUST2000
13-11-2020, 05:02 PM
Another fine image Andy.
I had a look at that area on Telescopius and there is not much to see but you have made it pop :).

multiweb
13-11-2020, 05:15 PM
Nice pic Andy but I'd be careful with that AI stuff wrt fine details. You're getting better results than the 3.9m AAO. ;)

Andy01
13-11-2020, 05:52 PM
Cheers Andrew - lots of research, planning, yada yada yada etc... glad you liked it though! :D



Interesting you should say that!
I'm an Astrobin judge, and there's been a lot of discussion behind the scenes recently about this very thing.
A recent APOD/IOTD was awarded to an image purportedly taken with a 5" 'frac that had almost as much detail as the Hubble!

So naturally, I thought I'd try out this new AI black magic and see what I could get.
FYI, here is a comparison of this neb: my image taken under suburban LP vs. one taken by Josep Drudis with a 20" CDK under pristine dark skies.

So, let the AI discussion commence! 1-2-3 go! :lol:

gregbradley
13-11-2020, 05:53 PM
A remarkable image Andy.

Greg.

multiweb
13-11-2020, 06:19 PM
I know there's a debate going on about it. This is just my personal experience and why I don't use it anymore.

I have the whole AI suite and got very excited at first too. I started using it on planetary. I'm pretty sure the first static shots of Jupiter I did in IR had details that were manufactured, but I didn't know any better or what I was looking at, so naturally I thought "wow! how cool is this". Then I started doing little animations, batching series of static frames with the same settings in Denoise AI and Sharpen AI. That's when I noticed that from one frame to the next, surface details started popping out of nowhere, so the rotating features were changing from one frame to the other. Upon closer inspection, and looking at other planetary shots, I quickly realised that the details weren't real.

So ok, I thought, let's use it in a more subtle way. I did high resolution moon shots. Heaps of them are available online, so you can very easily find a shot at the same FL that you can map 1:1 with yours. I didn't use Sharpen AI because by then I knew it was a no-go. So I started using Denoise AI, and to my surprise, even with all the sliders/settings at zero it still applied some modifications to the shots. So at the minimum settings, depending on the light incidence, the details were changed. Some fine crater shapes would change or move; highlights would pop up where there was nothing before, just by blinking. I then went online and found a video by Damian Peach talking about it and demonstrating what I had experienced.

So what I'm saying is: the AI suite is a no-go for any planetary or deep-sky work. If you want to enhance the feathers of a bird taken with a telephoto lens from a couple of km away, then yeah, alright, knock yourself out. But if you're using it to accentuate fine details that are clearly smaller than the size of your Airy disc, or if you end up with details a 4 m telescope doesn't resolve, then you've got to ask yourself: since when does the AI suite bend the laws of physics?
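Marc's batch test - running the same tool twice with identical settings and blinking the frames for features that appear from nowhere - can be automated with a simple difference map. This is a minimal illustrative sketch, not anything from the Topaz suite; `flag_manufactured_detail` and its 5%-of-full-scale threshold are hypothetical names and values of my own:

```python
import numpy as np

def difference_map(before: np.ndarray, after: np.ndarray) -> np.ndarray:
    """Per-pixel absolute difference, normalised to the original frame's
    dynamic range, so the threshold below is scale-free."""
    rng = float(before.max() - before.min()) or 1.0
    return np.abs(after.astype(float) - before.astype(float)) / rng

def flag_manufactured_detail(before, after, threshold=0.05):
    """Return (row, col) coordinates where the processed frame differs
    from the original by more than `threshold` of full scale -
    candidate 'manufactured' features worth blinking by eye."""
    return np.argwhere(difference_map(before, after) > threshold)

# Toy demo: a fake "denoiser" that sneaks a highlight into an
# otherwise untouched frame, as Marc saw on his moon shots.
frame = np.zeros((8, 8)); frame[4, 4] = 1.0
processed = frame.copy(); processed[1, 1] = 0.3   # invented detail
print(flag_manufactured_detail(frame, processed))  # -> [[1 1]]
```

A deterministic algorithm run on identical inputs should produce an empty flag list; anything else means the tool changed its mind between runs.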

Bassnut
13-11-2020, 07:53 PM
Excellent image Andy!.
Based on forum usage reports, I tried Topaz Gigapixel AI; it was apparently better than Denoise AI and Sharpen AI combined for astropics. To my surprise, binning x2 an image and then Gigapixel-upscaling it back to native res gave stunning results!! (The algorithm denoises and sharpens too.) Geez, you know, is this fake? If it actually reasonably "sharpens" blurred bits in a sort of deconvolution fashion, then this has some scientific credibility - convolution is a mathematically reversible effect. That binning helped (halving the res) is interesting; perhaps this allows more accurate processing with reduced noise.
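The binning half of Fred's bin-then-upscale experiment is easy to reproduce in software. A sketch of plain 2x2 average binning (the upscale step would be whatever resampler you prefer - this is not Gigapixel's algorithm):

```python
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels: halves the resolution and,
    for uncorrelated noise, halves the noise standard deviation
    (averaging four samples gives a sqrt(4) SNR gain)."""
    h, w = img.shape
    h2, w2 = h - h % 2, w - w % 2            # crop to even dimensions
    return img[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).mean(axis=(1, 3))

demo = np.array([[1.0, 3.0], [5.0, 7.0]])
print(bin2x2(demo))   # -> [[4.]] : the mean of the four pixels
```

The SNR gain from averaging is one plausible reason the bin-first workflow looked cleaner: the upscaler then works on data with half the noise.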

multiweb
13-11-2020, 08:22 PM
Hi Fred, there is a massive difference between an algorithm using maths, like a PSF for deconvolution for example, and a neural network trained to do a task using a huge pool of data points. AI has its place. If Google wants to develop a self-driving car that "knows" what a kerb, a road, a tree or another car is, then it has a huge amount of images from Street View, or even all the pics people post online, to train a machine to recognise potential obstacles. Then you call it machine learning and brand the end product AI. I suspect the star removal "AI" was initially fed a huge amount of starfields for a given focal length so it "knows" what a star is in order to remove it. Again, it is trained, and only as good as the data it was trained with. As far as Sharpen AI goes, you can't make sh!t up out of nothing and call it a day. So the more you train it, the better the results will be? It has to work 100%, giving you the exact same result for the same settings every single time, otherwise it's unreliable.
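For contrast, here is what the maths-based kind of sharpening Marc mentions looks like. This is a bare-bones Richardson-Lucy deconvolution sketch (my own minimal implementation, not any product's code): every update is driven by the measured PSF and the data themselves, with no training set anywhere, and it is fully deterministic.

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(blurred: np.ndarray, psf: np.ndarray,
                    iterations: int = 30) -> np.ndarray:
    """Iteratively re-estimate the true scene given the blurred image
    and the point spread function. Same inputs and settings always
    give the same result."""
    psf = psf / psf.sum()                    # normalise to unit flux
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(blurred.shape, blurred.mean())
    for _ in range(iterations):
        reblurred = np.maximum(convolve(estimate, psf), 1e-12)
        estimate = estimate * convolve(blurred / reblurred, psf_mirror)
    return estimate
```

Run on a star blurred by a known PSF, the iterations concentrate flux back toward a point; run too hard on noise, it still rings, which is why even decon needs a light hand.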

Bassnut
13-11-2020, 08:55 PM
If you train AI with enough data points? Absolutely. Given enough (granted that's a huge ask BTW, "enough"), AI simply finds patterns in data, building an average representation of that data to predict and fill in missing data in future examples, in many cases with real-time relearning to improve accuracy: the more data, the more accurate it is. Processes like deconvolution (given enough data) actually become part of the developed learning, even though they're not explicitly coded. The natural (as in nature, that is) effects in astro, like seeing, are consistent variable effects that, given enough data, are somewhat predictable (like convolution). If AI is also given sharp examples, this is very powerful: it can analyse the difference between blurred and sharp images, learn the difference and apply corrections.

Astronut07
13-11-2020, 09:16 PM
Andy

Stunning images

Cheers

Ryderscope
14-11-2020, 08:56 AM
Interesting debate kicking off here Andy on the ethics of image processing. Personally, I fall into the camp that says that anything not entirely based on the captured data, and thereby mathematically reversible, is straying too far. My understanding from reading this thread is that the AI suite is using external data sources to build its algorithms, with a composite image resulting.

This is not to say that these techniques should not be used and it becomes a personal choice for the image processor. My view is that it is acceptable as long as one has full disclosure which you have clearly done. For me, I will not be going down the AI suite path.

The net result here Andy is a fine image with a composition, colour and subject matter that demands immediate attention. I like the way that you bring a new focus and perception to familiar objects.

CS,
Rodney

Andy01
14-11-2020, 09:30 AM
Marc, Fred, Rodney & Ben, thanks for your comments and contributing to an interesting discussion about ethics/techniques and these new trends in post processing. :thumbsup:

I've loaded a comparison image here, where you can see the starless 7hr Ha stack before (right) and after Topaz NR (left). Previously in my workflow I used Topaz NR sparingly at the very end of the process, but in this case I used it on each stack prior to assembling the image.

I'm not sure I agree, though, that the AI is introducing something that isn't there; rather, in the style of a scene from the original "Blade Runner" movie, it's enhancing what's there already. :D (With the exception of two artifacts at 10 and 11 o'clock that resemble dust lanes - I could have softened/blurred these, but I chose not to for the sake of the exercise.)

As I mentioned, though, otherwise I've used it sparingly. It is indeed possible to push this technique further, and that's when Marc's observations certainly ring true. :)

PS: Welcome back Fred, I found your image of this to be very interesting when researching this target. :)

multiweb
14-11-2020, 10:14 AM
That pretty much sums it up. Enhancing means existing data. Relying on external data to enhance is compositing at best, or making a guess as to what it should look like.

Nikolas
14-11-2020, 10:52 AM
Hi Andy how much of a crop is this from your image?

Was this taken with the TAK?

Andy01
14-11-2020, 12:03 PM
Hi Nik, if you check my OP ^ you can see links to the full frame & cropped versions on AB along with all data & equipment used.
Cheers
Andy

alpal
14-11-2020, 01:02 PM
Hi Andy,
great images and I enjoyed the discussion on Topaz NR.
That's very interesting software.
You sent me on a 1 hour expedition last night investigating it.



Image processing will never please everyone especially
if it is making up or inventing detail that is simply not there.
However - we all like to do a bit of sharpening and noise reduction.
How much to do is subjective or based on opinion and artistic license
rather than strict scientific principles.
I actually like to see a bit of noise still left in a picture
so I can see where the noise floor was in the data -
it adds credibility to the image and NASA and places like CHART32
do it with their images.


Sharpening is another topic and we discussed sharpening worms before here:
http://www.iceinspace.com.au/forum/showthread.php?t=183650&page=2




cheers
Allan

strongmanmike
14-11-2020, 06:54 PM
Interesting discussion :)

Firstly the final image looks really nice Andy, the colouring is rather Metsavainio'ish :)

In the comparison you have made, what processing was done on the right hand image?

Assuming nothing was done already to the right-hand image to create, shall we say, fake features... then carefully comparing the two images, it looks pretty reasonable to me :question: :shrug: There are a couple'a "enhanced" features here and there that made me go... yeah?.. ooo-kay :question:, but overall it looks generally believable to me. I have previously been jovially vocal about the often horrendous use of decon and wavelet sharpening, resulting in details being made into very obvious little dots and worms that some people seem happy to believe are real detail :screwy: :bashcomp: :lol: But in this comparison, and if your right-hand image had no form of sharpening applied already, then comparing the two there appears to be little change between features but rather, as you say, just some obvious enhancements and smoothing/blending, on a very fine scale too, and the noise reduction looks to have been quite effective and natural-looking to me in the intervening spaces :)

Have to say, and just like tell-tale decon :scared: :lol:, it appears this application will give an image its own characteristic "look", so it will likely be rather easy (at least for me ;)) to pick when it was used now, so thanks for that :P... And this is a negative IMO: when you get it right, the viewer shouldn't be able to tell what application/filter has been used on an image. In fact I can see now that other imagers have obviously been applying this, or something similar, to their images - I wasn't sure how they did it, and I think they have over-applied it in many cases. It has an air or look about it that will make it fairly easy to over-apply in the quest to satisfy the need to make your images "look" really high-res :)... but at least for now, and from what you have posted at least, this "look" is more appealing to me than the dreaded decon/wavelet look :thumbsup:

Topaz huh...? :question: :lol:

Mike

multiweb
15-11-2020, 09:00 AM
Topaz Labs has been around for a very long time. Topaz 1 and 2 are fine. Just the latest AI versions seem to have an artistic licence.

marc4darkskies
15-11-2020, 12:08 PM
Certainly a pleasing and compelling image Andy. Well done! :thumbsup:

As for software that creates false detail, I'll be avoiding it like the plague, since there's no way of knowing what detail might be created and where. I guess it's kind of like over-saturating an image. The non-cognoscenti seem to love over-saturation, and those same people will think the detail looks marvellous too. But, ultimately, at the extreme end of the spectrum of applying "enhancements", it's fake. As a former scientist, that's unbearably irksome to me. And please, no comments from anyone wanting to argue about personal choice and what is real vs fake. You know what I mean.

Having said that, I do saturate, control noise (my camera's read noise is horrible) and enhance sharpness to bring out what's real. Imagine how boring it would be if we all posted raw images! The trick is to be diligent (always blink comparing before and after) and use a light hand in the application of enhancement methods. And, like Allan, I like to see a bit of residual noise in my subject and even the background so I know I haven't obliterated any faint detail.

I'm rambling now, but you get my meaning.

Andy01
15-11-2020, 05:29 PM
Thanks for the nice feedback Mike - no tweaks were done to the Ha stack on the right, it's straight out of APP > Starnet. :D



Thanks Marcus, glad you liked the image. :D

Each to their own as to the Topaz usage and results, I guess. Although I've had the software for some time, I haven't used it this way before, at the start of the post-processing/image assembly - I usually just run it at the end to minimise noise.

That said, now that I've tried this out, I'm not convinced by the nay-sayers that it's "generating details that aren't there" :question:, but simply refining what is, and in a very advanced manner. ;)

At the end of the day, having transparently and openly shared 'before & afters' of Topaz for the benefit and interest of this community, and enjoyed the resulting discourse, I'm delighted with this result - it made me happy, which is what really counts for everyone in this game :lol:

strongmanmike
15-11-2020, 06:29 PM
Hmmm?...?...yeah, I think I agree :question: at least from the comparison you have posted :shrug:.....:eyepop:....:lol:...but !..no worms!!! ok?? :mad2:

:lol:

sorry...:D

Mike

marc4darkskies
15-11-2020, 10:55 PM
An example of super advanced image enhancement AI ...

irwjager
16-11-2020, 11:27 AM
Rodney, I'd buy you an Internet-beer for that. :cheers: Really well said.
Topaz AI (and Starnet++ for that matter, which creates similar artifacts) is to be avoided if you would like to do astrophotography. The same obviously goes for selective processing (e.g. creating a hand-drawn mask to manipulate part of the image). If you're doing photography, then adding stuff that wasn't there in the first place is not ok, particularly if it's not an "accident" (e.g. accidental ringing/Gibbs phenomenon).

If you do astroart, then anything goes obviously, but I think this particular sub-section of the forum is mostly populated by photographers, not artists. As such, we assume that what you are showing us is real and not the result of arbitrary manipulation of the image, or the result of a hallucination (https://en.wikipedia.org/wiki/DeepDream) of a neural net.

It's not just Topaz AI either; many "standard" noise reduction routines are specific to terrestrial scenes, and built to reconstruct a scene. They are predisposed to reconstruct edges and geometrical shapes, whereas these are virtually non-existent in outer space. Such reconstructions fail miserably in - particularly - DSO datasets, ending up "reconstructing" detail where none exists. The same goes for debayering algorithms - the most "advanced" ones (for example those based on AHD) yield the worst SNR, as they try to "reconstruct" edges and detail based on random noise. Stacking that - now random - detail yields worse outcomes. This is one of the reasons why something "advanced" like PixInsight only offers something simple like VNG - precisely because it doesn't do anything fancy.

The best (and most widely accepted in astronomy) noise reduction algorithms for astrophotography only remove energy from your data and do not introduce it.
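Ivo's "remove energy only" criterion can be illustrated with a toy sketch (my own illustration, not any particular package's algorithm): split the image into a smooth base and a high-frequency residual, then shrink the residual toward zero with soft thresholding. The residual's magnitude can only decrease, so no pixel ever gains energy it didn't have.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subtractive_denoise(img: np.ndarray, sigma: float = 1.5,
                        thresh: float = 0.02) -> np.ndarray:
    """Soft-threshold the high-frequency residual: small fluctuations
    (mostly noise) are pulled to zero, larger features are kept but
    slightly attenuated. Nothing is ever added to the data."""
    base = gaussian_filter(img, sigma)
    resid = img - base
    shrunk = np.sign(resid) * np.maximum(np.abs(resid) - thresh, 0.0)
    return base + shrunk
```

The key property is easy to verify: at every pixel, the denoised residual is no larger in magnitude than the original residual, which a "detail-reconstructing" neural net cannot guarantee.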

strongmanmike
16-11-2020, 12:15 PM
:lol:...OI!...I resemble that comment :mad2:

..do I look deconvolved in that picture :shrug:

marc4darkskies
16-11-2020, 12:25 PM
:) Only the worm was deconvolved. The AI point blank refused to operate on your face! :P :lol: Go figure, it's a beautiful face too! :thumbsup:

multiweb
16-11-2020, 01:19 PM
Good on you to drop in Ivo so we can hear it directly from the horse's mouth. :thumbsup:


Well, that illustrates my initial concerns in a big way. :lol: If a neural network can interpolate a jellyfish wide shot into a dog's painting, I don't even want to think what it's doing to an astro image. Maybe the lower, more subtle setting gives you cats? ;)



What's your take on Bayer drizzle and drizzle integration - that's ok? I've been doing a lot of that with the FSQ, and as far as I can tell it doesn't manufacture details, but I thought I'd check with you anyway. :question:

Andy01
16-11-2020, 07:56 PM
That argument seems a little simplistic Ivo - can we not be/do both ie: create art from scientifically valid astro images? :question:

I'm hugely in favour of pushing established boundaries, all the way over the cliff if need be to create interesting and appealing images, and I'm not alone in that quest! :D

The debate about what's real or not is hugely subjective, as none of us have a warp capable starship to prove any of it, :lol: so why wouldn't you delight in something that helps ground based imaging punch through atmospheric distortion? :shrug:

This is simply the next tool in refinement of post processes. Did everyone scream it's not real when digital imaging arrived? or CMOS, or wavelet sharpening, or starmasks, etc...

Semantics aside, some of these arguments sound like the church telling Copernicus that the Sun revolved around the earth and anything else is heresy! :lol: :lol::lol:

So lighten up peeps, it's relatively new but AI is here to stay, and it's learning - just hope no-one founds Skynet anytime soon and we'll all be fine :rofl::rofl::rofl:

strongmanmike
16-11-2020, 08:03 PM
or decon..?

Yeah I did :rofl:

Mike

multiweb
16-11-2020, 10:10 PM
Andy, mate... what are you on? :lol: What you're describing is astro phantasy, not astro photography. Maybe IIS needs an additional section called 'astro art'; then that would work.

irwjager
16-11-2020, 10:32 PM
There's nothing about drizzling that makes up detail, so I'd say go your hardest :)
(And if it's good enough for the fine folks at NASA and the HST, then it's good enough for me!)

I totally agree of course that every image is an interpretation of barely visible photon counts. That's already 50% art right there.
We don't need to! Us mere mortals with our puny scopes can readily compare our images to - for example - majestic Hubble closeups to see if what we captured is real, if ever in doubt.
I'd be all for it, but that's unfortunately not at all what we're looking at here.
(and we have deconvolution - an actual working tool - for that very purpose ;) )
Even if you are not across how neural nets (and training thereof) work fundamentally, we still don't have to guess whether something is real. We can hop on to the Hubble Heritage website, look up our object and download a dataset (great fun in itself!). Or we can check some of the work done by our peers on AstroBin etc.
AI has been around for a long, long time (and what's considered AI keeps being pushed; remember when a chess computer was considered "AI"?). Like you, I'm also 100% convinced a practical application of deep learning for AP will come along some day, but this sort of simplistic data augmentation by detail hallucination is definitely not it.
(and let me be clear that I really, really want this to happen; I studied AI at the University of Amsterdam and I'm an AP algorithm nut :P)

Until then, wishing you clear skies!

multiweb
20-11-2020, 05:55 PM
No need to wait, Sky AI (https://skylum.com/luminar-ai-landscape?utm_source=fb&utm_medium=sponsored&utm_campaign=Luminar_AI_fb_new_users_lal_en&utm_content=LaL_Luminar_AI_owners&fbclid=IwAR3LZAPw1e1_y2rYPiXWXDuUms_3mAe4X-4GuZ3p7kWdiCQUOE1I_PH8VXU) to the rescue - replace the whole sky in one go (includes atmospheric effects) :lol:

Andy01
20-11-2020, 06:09 PM
Marc, I usually respect your opinions, but unless you have something meaningful to add to my OP, please stop wasting my time with this.
We all read your opinion & not everyone agreed with it. These repetitive niggles are getting boring now. Thanks :)

CoolhandJo
20-11-2020, 06:31 PM
Wow, Andy, great stuff. As some of you know, I've been out of the "game" for 8 years and only recently started imaging again. You should all be very proud of your images, because from my 8-year time-warp perspective the quality has blown me away. This image is such high quality :eyepop:

Well done Andy and good to read everyone's perspective on AI (which didn't exist 8 years ago)

Imme
20-11-2020, 06:35 PM
My 2c.....

I’m not across this AI stuff and so am unsure of what it consists of, so I can’t comment from a technical viewpoint, however........

....my view is if the digital data that was captured by the imager is the only data used in the picture then that is still an original picture that the imager can call their own work. It’s no different from using a sharpen tool, decon or even assigning colours to a NB filter.
Having said that though, if data is being pulled from an external source and added to the imagers data then it’s a composite and should only be partially attributed to the imager.

I don’t think there’s a difference between ‘a small enhancement’ and ‘replaced 50% with other people’s work’. Skill, equipment, understanding and knowledge are what make a great imager. Adding other people’s data to your own makes a great artist.

.....not pointing fingers andy, as I said, I don’t know how this AI stuff works.

multiweb
20-11-2020, 06:38 PM
Awwww... don't be a sour puss sensei. :sadeyes: You asked for the debate to take place in your post.

Peter Ward
20-11-2020, 06:49 PM
WTF? So it's "arty" now to use other people's work??

Great artist....rubbish...more like a great plagiarist. :shrug:

The_bluester
20-11-2020, 07:09 PM
That is not how the Topaz suite works though, it is not importing data from other sources and inserting it, it is extrapolating detail that is not visible from detail that is. The question becomes how much sharpening it can do before it is inventing detail, not revealing it.

People importing data from other sources is a whole other kettle of fish.

Imme
20-11-2020, 07:50 PM
I was trying to make the distinction between imaging and ‘something else’..... not calling it art. Although decoupage is deemed an art form isn’t it?

Imme
20-11-2020, 07:51 PM
As I said.... I’m not sure how it works so having had that explained (purely sharpening) then I don’t see an issue with it.

multiweb
20-11-2020, 07:59 PM
Guys this thread is getting heated.

Ivo knows neural networks and deep learning; he does it for a living. He said AI for astrophotography isn't quite ready yet, but one day it might be, and he's looking forward to it.

Andy, I'm not voicing an opinion, just pulling your leg a little ;). You know what they say about "opinions". Everybody's got one.

I explained that I quantitatively showed that Denoise AI and Sharpen AI are manufacturing details even on the lowest settings.

You say that you don't think they do, and you are happy to use them. I have absolutely zero problem with that. But you can't call it astrophotography, mate. It doesn't work that way.

pkinchington
20-11-2020, 08:57 PM
Lovely work again!

Andy01
21-11-2020, 07:37 PM
Hi folks, this is a heads-up that this Monday @ 1:30pm AEDT I'll be presenting on The Astro Imaging Channel - where, among other things, I'll be discussing the use (and ethics) of AI software for astrophotography.
Given some of the responses here, it should be fun! :lol:

https://www.youtube.com/c/TheAstroImagingChannel/videos

multiweb
21-11-2020, 08:00 PM
There's another bloke as deluded. He's in the oval office right now :P

alpal
24-11-2020, 12:42 AM
Andy




Thanks Andy,
I watched both videos -
the links are here:
2020.10.04 | Andrew Campbell (Andy's Astro): Creating Dynamic Images.
https://www.youtube.com/watch?v=wlc5qKJKRJk

2020.11.22 | Andy Campbell: Creating Dynamic Images (Revisited)
https://www.youtube.com/watch?v=GjTWAM7O4zo




I found it very interesting.
Our end image product is always so highly processed
that it can't be considered real.
As per the 2nd video - I'll add -

for example think of M42 - the Orion Nebula.

We can't see all the delicate levels of luminance when we look at a target -
it has to be stretched to reveal the beautiful details.
The dynamic range is too large for our eyes to see without digital stretching.
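One common digital stretch for that dynamic-range problem is the arcsinh function: close to linear at the bright end, strongly lifting at the faint end, so core and outskirts fit one display range. A minimal sketch (the `stretch` parameter name and default are my own):

```python
import numpy as np

def asinh_stretch(img: np.ndarray, stretch: float = 100.0) -> np.ndarray:
    """Map a linear image into [0, 1] with an arcsinh curve: faint
    luminance levels are lifted strongly, bright ones compressed,
    so something like M42's core and wisps are both visible."""
    lo, hi = float(img.min()), float(img.max())
    x = (img - lo) / max(hi - lo, 1e-12)      # normalise to [0, 1]
    return np.arcsinh(stretch * x) / np.arcsinh(stretch)

levels = np.array([0.0, 0.001, 0.01, 1.0])    # faint wisps to bright core
print(asinh_stretch(levels))
```

A pixel at 1% of full scale comes out around 17% after the stretch, which is exactly the kind of lift our eyes can't do at the eyepiece.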
Also as per the 2nd video - I agree -

our eyes don't pick up the beautiful colors of M42 either.
I've seen it from Mount Baw Baw in my 8" f/6 scope -
and there was maybe only the faintest hint of any colour in it.
It was a spectacular black and white image to my eye.
The 4 Trapezium stars were like bright pin points - so unlike any photo.


However - when I look at the Jewel Box star cluster -
NGC 4755 -
I can definitely see that one star in it is a beautiful orange colour.
& the other ones are slightly blue.
The photos of it look correct as seen.


Also - I am not a fan of starless images.
As a processing technique, the stars can be removed to make it easier
to process a faint nebula and then put back later - that's all.



cheers
Allan

Andy01
24-11-2020, 05:59 PM
Thanks for tuning in to my presentations Allan, your objective, unbiased feedback is welcome and refreshing to read on this thread. :thumbsup:

For those interested, I’ve unpacked AI for astrophotography from 39:20 to 1:00:20 in the second presentation, followed by insightful comments from leading US astrophotographer Eric Coles.

https://www.youtube.com/watch?v=GjTWAM7O4zo

I’d encourage anyone to watch (as Allan did :D) prior to making further comments here, as considerable preparation went into this presentation. :)