IceInSpace > Equipment > Astrophotography and Imaging Equipment and Discussions

#21
17-02-2014, 11:00 AM
RickS (Rick)
Quote:
Originally Posted by alpal
Not really Mike - just by stretching an image you're making it fake.
The naked eye would not see it like that.
The stretching does allow you to see the faint details.
Actually, the fundamental reason we stretch raw CCD images is to make them more like the naked eye would see them. A CCD has a linear response. Our eyes have a logarithmic response.

The "PixInsight" philosophy is that masking and manipulating is fine, but the masks must be derived from the data. Painting a mask is a heinous crime.

Cheers,
Rick.
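Rick's linear-vs-logarithmic point is easy to see numerically. A minimal Python sketch with made-up pixel values, using asinh as one common log-like stretch (this is an illustration, not any particular program's stretch):

```python
import numpy as np

# Made-up linear CCD values: the faint detail occupies a tiny
# fraction of the dynamic range.
linear = np.array([0.001, 0.01, 0.1, 1.0])

# A log-like stretch (asinh here) compresses the bright end and
# lifts the faint end, closer to the eye's non-linear response.
stretched = np.arcsinh(linear / 0.01)
stretched /= stretched.max()   # normalise to [0, 1]

print(np.round(stretched, 3))  # faint values are lifted, ordering preserved
```

The faintest value goes from 0.1% of full scale to a few percent, while the brightness ordering of the pixels is untouched, which is why a plain stretch reveals faint detail without inventing any.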
#22
17-02-2014, 12:15 PM
alpal
Quote:
Originally Posted by strongmanmike
Ok, well imagine you stretch one version of a galaxy image and not the other. Now one will have a lighter background than the other. Layer them with the lighter one below the darker one, pick an arbitrary place on the edge of the galaxy on the top layer, and very carefully start painting (revealing) the layer underneath. You can very carefully reveal a galaxy arm in a place that is simply a lighter background and voila! A new faint galaxy arm, YAY!... except there is no real galaxy arm there at all. Of course, while few would think of being so blatant, the technique can create some good make-believe shapes and features.

That's all I am saying

In that particular case you have a good point.
What about Ken's videos?
#23
17-02-2014, 12:17 PM
alpal
Quote:
Originally Posted by RickS
Actually, the fundamental reason we stretch raw CCD images is to make them more like the naked eye would see them. A CCD has a linear response. Our eyes have a logarithmic response.

The "PixInsight" philosophy is that masking and manipulating is fine, but the masks must be derived from the data. Painting a mask is a heinous crime.

Cheers,
Rick.
True - but we are revealing much more than the naked eye could ever see when we stretch an image.
We can hardly even see our own Milky Way - it's so dim -
unless we are at a very dark site.
#24
17-02-2014, 01:07 PM
gregbradley
I think Mike is talking about a different thing. Using a mask and blending it in lighten mode, or enhancing what is there and then blending that in on the object of interest, is only enhancing data that is there in the first place.

Some brush tools and lassoing can create an artificial structure based on the borders of the selection. I take it he is referring to that.

As far as R Jay goes, I don't know that he adds false detail to his images, but rather super heavy colour saturation and stretching to coax out every last drop. He may have features enhanced out of proportion to their relative brightness - so relative brightnesses are changed heavily - but I don't think that is creating an artificial structure that does not exist. Much like Hubble narrowband showing up structure that regular colour does not.

Don't forget he uses 20 inch RCs, dark sky sites, and super long exposures with good cameras and mounts. The 20 inch RCOS is probably the most premium scope around, and it's too bad they went out of business, but they got some amazing data from those scopes.

Greg.


Quote:
Originally Posted by alpal
Yes Mike but Ken Crawford wouldn't agree with you:

http://www.imagingdeepsky.com/Presentations.html

He "digs out the details" using masks.

I often use masks to get the desired result but in a different way, e.g.
an inverted layer mask is great to get rid of noise in low-signal areas,
but what about when there is noise in a high-signal area?
The only way is to:
reduce noise for the entire picture until the noise in the area you want is reduced,
then make a hide-all layer mask,
then paint on the mask over the area you want reduced.
You can see the results as you paint.
Then you can blur the mask & reduce the opacity to make a perfect, seamless blend.

I wouldn't have known that unless I had watched all his videos.

You can also do the same thing to sharpen a certain area while
leaving other areas alone -
e.g. the edge of Centaurus A is very bright & the tiny details
get lost in this brightness.
A selective adjustment of curves along the edges will show those otherwise hidden details.
You can "dig out the details" in such ways without
having a false image - it's just using the data you have.

Also - if you want a 3D effect then it's great to make the front area of
a target ever so slightly brighter.

Ken's methods allow you to give more impact to your images.

I will continue to use such methods but only in special cases -
it's too much work to do an entire image that way.

cheers
Allan
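The quoted recipe (denoise everything, hide-all mask, paint, blur, drop opacity) is, at bottom, a feathered blend between the original and a denoised copy. A minimal numpy sketch with synthetic data; the 3x3 box blur stands in for a real noise-reduction pass, and all values are made up:

```python
import numpy as np

def box_blur3(img):
    """Cheap 3x3 mean filter, standing in for real noise reduction."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

rng = np.random.default_rng(0)
image = rng.normal(0.5, 0.05, (64, 64))   # noisy high-signal area (synthetic)

denoised = box_blur3(image)               # 1. denoise the entire picture
mask = np.zeros_like(image)               # 2. hide-all layer mask
mask[20:40, 20:40] = 1.0                  # 3. paint the area to be reduced
mask = box_blur3(mask)                    # 4. blur the mask (feather the edge)
opacity = 0.8                             # 5. reduce opacity for a seamless blend

result = image + opacity * mask * (denoised - image)
```

Outside the painted region the result is identical to the original; inside it the noise drops, and the blurred mask plus reduced opacity keep the transition seamless.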
#25
17-02-2014, 02:02 PM
Geoff45 (Geoff)
Well, just to come back to the OP's original question, which was about colour in astrophotos, here is an interesting thread on the PI forum: http://pixinsight.com/forum/index.php?topic=2542.0
Skip through the first half of the thread-starter post and then get down to the nitty-gritty.
Geoff
#26
17-02-2014, 02:37 PM
strongmanmike (Michael)
Quote:
Originally Posted by gregbradley
I think Mike is talking about a different thing. Using a mask and blending it in lighten mode, or enhancing what is there and then blending that in on the object of interest, is only enhancing data that is there in the first place.

Some brush tools and lassoing can create an artificial structure based on the borders of the selection. I take it he is referring to that.

As far as R Jay goes, I don't know that he adds false detail to his images, but rather super heavy colour saturation and stretching to coax out every last drop. He may have features enhanced out of proportion to their relative brightness - so relative brightnesses are changed heavily - but I don't think that is creating an artificial structure that does not exist. Much like Hubble narrowband showing up structure that regular colour does not.

Don't forget he uses 20 inch RCs, dark sky sites, and super long exposures with good cameras and mounts. The 20 inch RCOS is probably the most premium scope around, and it's too bad they went out of business, but they got some amazing data from those scopes.

Greg.
Ok going a bit OT here...

One of RJ's images that had me thinking about the possible dangers of painting masks is that of NGC 1097. In this (remarkable) image RJ was able to show, very clearly, two of the jets emanating from NGC 1097... but strangely, not the third (or the VERY faint 4th). The two opposing jets visible in this image, however, are not the two brightest of the three jets but are in fact the brightest and the 3rd brightest; the 2nd brightest jet does not show in his image at all, even under an extreme stretch. Nothing... why? This doesn't make any sense unless some feature-specific enhancements were not applied to this jet, perhaps because the processor wasn't aware of its existence? I dunno, but given the clear brightness of the other two jets it should be visible if more global attention was paid to all faint features.

Also if you compare THIS version of M83 to THIS version what is going on? Perhaps this is the heavy handed application of brightness adjustments to very specific features, in this case specific areas of the galaxy arms..?

Either way, and just to clarify, I am not accusing anyone of any wrongdoing here. I am just pointing out what, in my opinion, look to be examples of what selective, imbalanced and perhaps arbitrary brightness enhancements can do, and thus questioning the accuracy and even validity of other faint features revealed in other images created with similar techniques.

NB: I still think RJ's images are excellent of course and love his colour fidelity

Mike
#27
17-02-2014, 02:57 PM
RickS (Rick)
Mike,

I think the reason you can't see R1 in the GaBany rendition of NGC 1097 is because the most visible part is actually outside the frame of his image.

I do agree that the two versions of M83 look very different! I've seen ringing from large scale wavelet operations that looks like that...
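Ringing from over-boosting a large spatial scale is easy to reproduce: any strong real edge overshoots on both sides. A minimal 1-D numpy sketch with a synthetic step edge, using an unsharp-mask style boost to stand in for a large-scale wavelet layer boost:

```python
import numpy as np

# A clean step edge (e.g. galaxy arm against background), synthetic values.
signal = np.where(np.arange(100) < 50, 0.2, 0.8)

# Large-scale low-pass via a wide box filter.
kernel = np.ones(21) / 21
lowpass = np.convolve(signal, kernel, mode="same")

# Unsharp-mask style boost of the large scale.
sharpened = signal + 1.5 * (signal - lowpass)

# Overshoot/undershoot (ringing) appears on either side of the edge:
# the dark side dips below 0.2 and the bright side spikes above 0.8.
print(sharpened.min(), sharpened.max())
```

Those dips and spikes are structure that was never in the data, which is why large-scale sharpening near real edges can look like new features.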

Cheers,
Rick.
#28
17-02-2014, 03:19 PM
strongmanmike (Michael)
Quote:
Originally Posted by RickS
Mike,

I think the reason you can't see R1 in the GaBany rendition of NGC 1097 is because the most visible part is actually outside the frame of his image.
...hmmm? Ahhh yes, you're right... that might explain it, I guess? OK, perhaps that wasn't the best example of what I am talking about...
#29
17-02-2014, 03:19 PM
avandonk
The human visual response is no guide to processing! We constantly, in real time, adjust to all the parameters of the environment we actually live in.

As an example, your colour vision happily and automatically adjusts for colour temperature/balance between daylight, tungsten and fluoro.

In the good old days of film one needed filters!

The point of our colour vision is to enhance contrast that does not exist as far as luminance is concerned.

We primates evolved red-sensitive cones driven by the signals plants gave us: fruits were ripe and ready for eating when they were red!

If some insects were astrophotographers they would need, apart from RGB, at least two or three UV filters and a couple of IR filters. Snakes and other reptiles can detect your body heat!

The take home message is to use colour to enhance real features.

I personally do not like the oversaturated images unless it is all narrow band.

Bert
#30
17-02-2014, 06:05 PM
Shiraz (Ray)
exceptional posts guys - heartfelt thanks.

I initially thought I was working towards a systematic method for representing galaxies etc. "realistically". Looks like there is no such thing...

I still have difficulty accepting that the image presentation need not have much reference to an underlying reality - but it looks like everything in an image actually starts out corrupted anyway - having stars of varying sizes is an artefact, available detail is normally determined by the atmosphere (which has nothing to do with the object) and the true colours (whatever that might mean) will never be visible to a human.

I think I am coming to the position that PixInsight provides a good starting point for acceptable colour (good link, thanks Geoff) and that I will just have to tweak beyond that to show what I consider interesting, in whatever colours I think are appropriate. Darn, I hoped to get to something more concrete than that. Suppose an ex-physicist should have known that things get less comprehensible the deeper you look...

so the rules currently are:
- Pixinsight automated processes for basic galaxy colour balance + individual tweaking to personal taste
- no green stars or background colour casts - and saturated stars are white
- emission nebulae can/should have "neon" saturated colour
- enhanced colour saturation and sharpening applied elsewhere with care to enhance features of interest
- no artificial "super" enhancing to create features where none exist
- need to spend some time getting to understand Excalibrator
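The "no green stars or background colour casts" rule above is commonly implemented with something like PixInsight's SCNR; in its average-neutral form it simply caps green at the mean of red and blue. A minimal numpy sketch with made-up pixel values:

```python
import numpy as np

# Three hypothetical RGB pixels: the middle one has a green cast,
# the last is a saturated (white) star.
rgb = np.array([[0.2, 0.3, 0.4],
                [0.3, 0.9, 0.3],
                [0.8, 0.8, 0.8]])

r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
g_fixed = np.minimum(g, (r + b) / 2)      # average-neutral green cap
rgb_fixed = np.stack([r, g_fixed, b], axis=1)

print(rgb_fixed)  # green cast removed; neutral pixels unchanged
```

Note that pixels whose green is already at or below the red/blue mean, including saturated white stars, pass through untouched, which is what makes the cap safe to apply globally.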

Again, all of the posts to date really have been extremely helpful and full of information - thanks

regards Ray

Last edited by Shiraz; 17-02-2014 at 08:11 PM.
#31
17-02-2014, 06:41 PM
avandonk
The real problem is that Ha or NII emitting regions are at best faintly visible to human vision.

What colour should we give them? Red, deep red, more red?

It is a bit like saying an x-ray of your leg is not real because you cannot see it in real time.

We can see further than all the generations before because we see far more!

Bert