#1  
Old 18-12-2016, 07:02 PM
sharptrack2 (Kevin)
Registered User

Join Date: Oct 2015
Posts: 331
What is "Stretching"?

Hi all,

A simple post processing question...

I've seen it mentioned many times in procedures, books and videos, but have yet to see anything that actually describes what stretching is or how to do it.

Am I correct in interpreting the talk, and the motions seen in videos, that stretching is simply the levels and curves adjustments? And that the term comes from watching the histogram, which seems to get "stretched" as you make the "right" adjustments to a "properly" exposed set of images?

BTW, I absolutely hate post processing but realise that it is the only way to get anything recognisable to put in a scrap book or journal, so bear with me while I struggle and ask questions for clarification. I have Chris Woodhouse's book on astrophotography, The Backyard Astronomer's Guide, and have read most everything suggested here on the forums, which mostly deals with the more costly side of imaging by referencing paid programs like Nebulosity, PixInsight, Photoshop, etc. Which is great when you have them. If you don't, you are left with the onerous task of interpreting and translating the descriptions to the application you are using, and I'm sure I'm not the only person who has noticed that not every application uses the same consistent set of terms, descriptions, or processes.

Apologies, end of rant... thanks for reading and answering!
  #2  
Old 18-12-2016, 07:33 PM
Atmos (Colin)
Ultimate Noob

Join Date: Aug 2011
Location: Melbourne
Posts: 6,980
Stretching is as you describe, playing with the levels and curves. It is what takes your data away from being "linear"; this is what allows you to see both the noise floor of an image and the bright areas at the same time.

Stretching could be described simply as setting the black and white points of an image.
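In code terms, that black/white point description is just a clip-and-rescale. A minimal NumPy sketch (the sample values are made up for illustration):

```python
import numpy as np

def levels_stretch(img, black, white):
    """Clip to [black, white] and rescale to the full 0-255 range."""
    img = img.astype(np.float64)
    out = (img - black) / (white - black)   # map black -> 0.0, white -> 1.0
    out = np.clip(out, 0.0, 1.0)            # values outside the range saturate
    return (out * 255).astype(np.uint8)

# A dim "linear" frame: values huddled near the bottom of the range
frame = np.array([5, 10, 20, 40, 60], dtype=np.uint8)
print(levels_stretch(frame, black=5, white=60))  # -> [  0  23  69 162 255]
```

Everything below the black point goes to 0, everything above the white point to 255, and the in-between values are spread across the whole range.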
  #3  
Old 18-12-2016, 07:47 PM
glend (Glen)
Registered User

Join Date: Jun 2013
Location: Lake Macquarie
Posts: 7,033
Kevin, there are a number of online tutorials that show step by step how to bring out the often hidden detail in files. I assume you know how to use a program like DSS to stack your image files to build a final image to process. There are some cheap options when it comes to processing; I used an old free version of Photoshop for a couple of years, then upgraded to the most recent cloud-based version of Photoshop CC. I guess because I started with it I find it the easiest of the processing apps to use. I find PI particularly obtuse, but many here swear by it.
In so far as your description is concerned, levels and curves (in Photoshop) are some of the main tools, but you also have access to others. The 'art' lies in the balance of the processes you employ, but it is easy to 'push' your data too far. You might find some of Ken Crawford's Photoshop tutorials to be informative.
Free, or near free, apps like the old Photoshop CS version can have a limited tool set or lack fine control. Programs like GIMP are favoured by some.

Last edited by glend; 18-12-2016 at 09:13 PM.
  #4  
Old 19-12-2016, 11:22 AM
RickS (Rick)
PI cult recruiter

Join Date: Apr 2010
Location: Brisbane
Posts: 10,584
The data from a CCD or CMOS sensor is "linear." That means that the value in each pixel is linearly proportional to the number of photons detected. If one pixel detects ten times as many photons as another one, then it will have a value (in ADU) that's ten times larger.

Our eyes don't respond to light linearly like a camera sensor does. Their response is often described as logarithmic, but the reality is much more complex than that (see http://www.telescope-optics.net/eye_...y_response.htm for the gory details). Stretching is the process of converting linear sensor data into something that matches our vision, boosting low signal values disproportionately.

The methods available to perform this stretching vary between processing packages. In PS it is typically done, as you said, with levels and curves.
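To make that disproportionate boost of low values concrete, here is a minimal sketch (Python/NumPy; the gamma value is illustrative, not what any particular package uses):

```python
import numpy as np

def stretch(x, gamma=2.2):
    """Normalise to 0-1, then raise to 1/gamma: a simple non-linear
    stretch that lifts faint values far more than bright ones."""
    x = np.asarray(x, dtype=float)
    return (x / x.max()) ** (1.0 / gamma)

linear = np.array([0.01, 0.1, 1.0])  # faint nebulosity, mid tones, bright star
out = stretch(linear)
# The faintest value is boosted roughly 12x while the brightest is unchanged.
print(out.round(3))
```

Note the stretch preserves ordering (brighter stays brighter) while compressing the bright end, which is exactly what lets the noise floor and the bright core sit in one visible image.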

Cheers,
Rick.
  #5  
Old 19-12-2016, 01:31 PM
sharptrack2 (Kevin)
Registered User

Join Date: Oct 2015
Posts: 331
Thanks everyone,


My question comes from already having a photo editor that came with one of my PCs way back when, and trying to translate Photoshop tutorials into its language. It isn't quite the same as Adobe. As an example, my program doesn't refer to "Layers" as layers. They are virtual copies, and once you're finished you stack the two. Fundamentally the same function, just worded differently.

I've reviewed hundreds of YouTube videos and other resources looking for an explanation of what is technically happening when you move this slider or that slider. So many videos are just that: "move this, move that, see, it all looks great!". I appreciate that most people probably understand, at some level, what is happening and are happy to copy this instruction or that one, but unfortunately I'm not one of those people. I want to understand the nuts and bolts underneath so that I can make informed decisions about what steps to take in a given scenario. I suspect it's a self-destructive trait that most wouldn't understand why I maintain so vigilantly. It just feels too much like being a lemming and running off the cliff, even though it's only a few centimetres' step to the bottom.

The technical explanations are fantastic. I'll definitely spend some time with the link you sent, Rick. While it doesn't address what to do in Photoshop specifically, it should help me interpret the user manual's explanations and the videos.

Thanks for recommending Ken Crawford's Photoshop tutorials Glen, the name doesn't ring a bell with me but I might have dismissed it long ago as I was focused on finding info for my particular photo editor. I've adopted another tack and am pursuing the science behind photo editing instead of the mechanics and perhaps these will be helpful.
  #6  
Old 19-12-2016, 01:48 PM
bojan
amateur

Join Date: Jul 2006
Location: Mt Waverley, VIC
Posts: 6,932
Kevin,
If you are a DSS and Canon person, this thread may help:
http://www.iceinspace.com.au/forum/showthread.php?t=48651&highlight=digital+photo+professional&page=2, Post #37
  #7  
Old 19-12-2016, 09:39 PM
sharptrack2 (Kevin)
Registered User

Join Date: Oct 2015
Posts: 331
Thanks Bojan,

Every little bit helps.

What I want to understand is: what is changing as you adjust the curve? The data can only be manipulated in certain ways. If the histogram shows the magnitude of photons for each pixel across a particular range (0-255), what is happening to the data as you move the curve? Are you simply changing the reference points for how any particular program interprets the data, as Col mentions?

Is it as simple as removing all the empty points at each end of the 0-255 histogram so that the signal to noise ratio artificially improves?

This is the kind of understanding I find myself craving... as I mentioned, possibly a little self-destructive, in that some may think it kills all the fun in creating an image.

A somewhat ironic aspect of all this is that I'm not really interested in going down the imaging path. Too expensive, and once an image is taken and processed, I'd be done. My interests are more on the observational side and simply capturing moments. "Crazy it is... young Padawan..."
  #8  
Old 20-12-2016, 06:26 AM
bojan
amateur

Join Date: Jul 2006
Location: Mt Waverley, VIC
Posts: 6,932
By adjusting the curves, you are changing the starting point (bias) and the polynomial factors used to calculate the display brightness for each pixel value on the screen.

For example, your data (original pixel value) could be, say, 128, but the displayed value will be 255. Or the original pixel value is 10 and the displayed value is 120. For a pixel value of 5, the display value could be 0. (Note that the factor in this example is different for different original pixel values: small original values are amplified more than large ones.)
The processed picture is saved without the curve polynomial in most cases (DPP can save them, though).

Things are a bit more complicated than this, because the eye's response to illumination is logarithmic, not linear. That is why you will find the term "gamma" and its associated factor in the literature.
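A toy version of such a display curve (the black point and gamma used here are illustrative assumptions, not the exact curve behind the numbers above):

```python
def display_value(pixel, gamma=2.2, black=5):
    """Toy curves adjustment for an 8-bit pixel: subtract a black point,
    normalise, then apply a gamma curve. The black=5 and gamma=2.2 values
    are made up for illustration."""
    x = max(pixel - black, 0) / (255 - black)
    return round(255 * x ** (1 / gamma))

for p in (5, 10, 128, 255):
    print(p, "->", display_value(p))
# Small values are amplified far more: 10 roughly quadruples,
# while 255 stays at 255 and 5 is pinned to black.
```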

Signal to noise ratio can't possibly be improved by curve adjustment; however, it is possible to offset the noise floor so details are brought up.
The price paid for this offset is increased visibility of digitisation noise (coarser tone resolution).

BTW, DPP and DSS operate with 12-16 bits of resolution, not 8 bits (0-255) like you mentioned.
Only the final image (in JPEG format) is saved with 8-bit resolution.
  #9  
Old 20-12-2016, 02:37 PM
sharptrack2 (Kevin)
Registered User

Join Date: Oct 2015
Posts: 331
Thank you Bojan,

Quote:
By adjusting the curves, you are changing the starting point (bias) and the polynomial factors used to calculate the display brightness for each pixel value on the screen.

For example, your data (original pixel value) could be, say, 128, but the displayed value will be 255. Or the original pixel value is 10 and the displayed value is 120. For a pixel value of 5, the display value could be 0. (Note that the factor in this example is different for different original pixel values: small original values are amplified more than large ones.)
The processed picture is saved without the curve polynomial in most cases (DPP can save them, though).
Appreciate the explanation, that's what I've been striving towards.

Quote:
Things are a bit more complicated than this, because the eye's response to illumination is logarithmic, not linear. That is why you will find the term "gamma" and its associated factor in the literature.
If I put two and two together, gamma is the log curve used by the application to display the image, and we adjust it to compensate for the difference between what our eye sees and what the camera saw. Correct (or at least close)?

Quote:
Signal to noise ratio can't possibly be improved by curve adjustment; however, it is possible to offset the noise floor so details are brought up.
The price paid for this offset is increased visibility of digitisation noise (coarser tone resolution).
Re: SNR, as radio frequency is one of my specialties, I do understand SNR. It's an easy concept to transfer to imaging, which is why I used the term "artificially". It's much the same as digital signal processing in radios (actually it is exactly the same): while you can devise algorithms to pull bits out of the noise based on patterns, you are still restricted by the amount of signal present and the noise floor. Substitute bit error rate for "coarser tone resolution".

Quote:
BTW, DPP and DSS operate with 12-16 bits of resolution, not 8 bits (0-255) like you mentioned.
Only the final image (in JPEG format) is saved with 8-bit resolution.
My 0-255 reference came from something I had just watched while seeking more detail about what a histogram is and how to read it. They may simply have used the example of a point-and-shoot camera that only puts out JPEGs. Interestingly, does that mean the histogram on a DSLR is measuring the full resolution of the RAW file (is that range known)? Does it pick a format to display in, 12- or 16-bit?

So many questions come to mind... more reading and researching...
  #10  
Old 21-12-2016, 07:40 AM
bojan
amateur

Join Date: Jul 2006
Location: Mt Waverley, VIC
Posts: 6,932
Hi Kevin,
Not sure about your last question in general (about the histogram on the camera)... but this is probably the case with Canon (if the image is saved in RAW).

As for your question about gamma, the answer is yes: not only close but spot on.
  #11  
Old 21-12-2016, 08:38 AM
glend (Glen)
Registered User

Join Date: Jun 2013
Location: Lake Macquarie
Posts: 7,033
Kevin, I believe the Canon Liveview display is a processor-'stretched' version of what is captured (in RAW). Looking at the on-camera histogram (if you have selected to display it), it will seem to have a medium stretch applied; otherwise you would not see much, as the data would be too linear.
  #12  
Old 21-12-2016, 10:21 AM
Camelopardalis (Dunk)
Drifting from the pole

Join Date: Feb 2013
Location: Brisbane
Posts: 5,425
Quote:
Originally Posted by sharptrack2 View Post
My 0-255 reference came from something I had just watched while seeking more detail about what a histogram is and how to read it. They may simply have used the example of a point-and-shoot camera that only puts out JPEGs. Interestingly, does that mean the histogram on a DSLR is measuring the full resolution of the RAW file (is that range known)? Does it pick a format to display in, 12- or 16-bit?
The analog signal is read off the sensor and converted to digital by the ADC, and the result is stored in the RAW file. The resolution of the ADC in most Canons is 14-bit, but there are many cameras on the market that are 12-bit; astro CCDs tend to be 16-bit. 14-bit gives you values from 0-16383.
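Those ranges follow directly from the bit depth (full scale is 2^bits - 1), and squeezing a RAW value into an 8-bit JPEG discards the low bits. A quick sketch:

```python
# Full-scale ADU value for common ADC bit depths: 2**bits - 1
for bits in (8, 12, 14, 16):
    print(f"{bits}-bit: 0-{2**bits - 1}")
# 14-bit gives 0-16383, matching the Canon range mentioned above.

# Reducing a 14-bit value to 8 bits for a JPEG throws away the low 6 bits:
adu_14bit = 16383
print(adu_14bit >> 6)  # -> 255
```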

From what I understand, the raw image is converted to JPEG for the rear view screen and histogram...digital cameras are very efficient at doing that conversion, but that isn't to say that the JPEG is an accurate representation of the raw data.

Again for a Canon, you should expose to bring the main peak of the histogram off the left hand edge at least. The main peak of the histogram is the background sky (unless you're imaging something like Eta Carinae and it fills the whole FOV), and you want this to overwhelm the read noise of the camera as much as possible, as otherwise when you try to stretch your signal to bring out the fainter detail you will also be stretching the ugly noise too.

Typically, the bias noise has values around 512-2048 (depending on the camera), so the range of values where you want to be recording signal is between that and the maximum. Of course, once you hit the maximum values the object (be it a star or otherwise) is over-exposed and there's no coming back from that! IMO the sweet spot is where the stars in the frame are not over-exposed but your signal is above the noise. It's not easy to judge that from the back screen of the camera.
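One way to sketch that check in code (the bias level, margin and 14-bit full scale here are illustrative assumptions, not values for any specific camera):

```python
import numpy as np

def sky_peak_ok(sub, bias_level=2048, margin=3.0, full_scale=16383):
    """Rough exposure sanity check, assuming the histogram's main peak is
    the background sky: its median should sit well above the bias/read-noise
    floor, and almost nothing should be clipped at full scale.
    bias_level, margin and full_scale are made-up illustrative numbers -
    check the figures for your own camera."""
    sky = np.median(sub)
    clipped_fraction = np.mean(sub >= full_scale)
    return bool(sky > margin * bias_level and clipped_fraction < 0.001)

# A toy 14-bit sub whose sky background sits around 7000 ADU
rng = np.random.default_rng(0)
sub = np.clip(rng.normal(7000, 300, size=(100, 100)), 0, 16383)
print(sky_peak_ok(sub))  # -> True
```

A sub with its background stuck near the bias level would fail the same check, which is the "peak hugging the left edge" situation Dunk warns about.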
  #13  
Old 21-12-2016, 01:51 PM
sharptrack2 (Kevin)
Registered User

Join Date: Oct 2015
Posts: 331
Thanks Guys,

Appreciate the details. This is slowly coming together for me and I'm sure I will eventually stop thinking and just do, but it is immensely helpful for me to get this deep technically.

Displaying a JPEG version makes sense as it is easy to do and would serve the purpose. I'll have to somehow make a comparison of the display against my editor and see if the resolution of the histogram is significantly different.

The first couple of images I have posted were just so-so, since I was using a mediocre telescope and was not very particular in my mount setup. It was all just to get something to practice with.

But now I have another set of data that I have started to work with; I stopped when I decided I needed to know more about processing. I would like to invite anyone who is interested to take a look and maybe relate some of what we have been talking about here to this set of data. Here is a link to the data on my OneDrive cloud storage. I don't remember all the details, and my work computer won't natively read RAW files, but here is what I do remember... Canon 450D, Bresser-Messier 127mm f/5.5, SW HEQ5 Pro, not guided but drift aligned. 40 subs, 30-40 sec each, ISO 400 (I think). There is a set of darks, and what I hope are bias subs (fast shutter speed, same ISO). Sorry I didn't name them, I just looked at the details when I started stacking... I'll get my workflow together soon!

Here is the link...

https://1drv.ms/f/s!AlH7c1YqjBS4g9pUq5-foUIJn2LsWg


I've already run this once through DSS and saved my end result as a TIFF, and welcome any comments, particularly around Dunk's recommendations on exposure. Is the data good, bad, or indifferent? Under-exposed? Not long enough? There is a strong source of light pollution (Gosford CBD) below the constellation, so I was trying not to bring too much of that into the image.
  #14  
Old 21-12-2016, 02:23 PM
Camelopardalis (Dunk)
Drifting from the pole

Join Date: Feb 2013
Location: Brisbane
Posts: 5,425
Had a quick look at one of the lights and it looks like it could use a teensy bit more exposure...so what I'd suggest is try, say, 50-60 seconds. If you find the Trapezium stars are then blown out, dial it back. M42 is such a tricky target to practice on

Obviously you'll know this as you weren't guiding, but there are mucho star trails. Certainly check your drift alignment more closely, as I wouldn't expect to see such long trails in that time period on an HEQ5. Check your balance too, as that could throw it off.

The thermal noise is also a problem, but that's a warm summer night problem
  #15  
Old 22-12-2016, 10:23 AM
sharptrack2 (Kevin)
Registered User

Join Date: Oct 2015
Posts: 331
Thanks Dunk,

M42 is probably high enough now that I can get away with longer subs. Will try when I have another clear evening to setup. I do seem to pick the most challenging starting point even though I don't intend to. Maybe I should try the Tarantula nebula instead. Seems pretty popular at the moment.

Which leads me to the star trails: can you point me to an example? I had zoomed in and thought they looked pretty good considering my experience so far. I noted some elongation on a couple, but most of the stars in the centre looked reasonably round, IMHO. The first image definitely has trails, and I think that's because I may have bumped the mount, or it moved when I walked around it (the only clear space not flooded with street light is on the grass); the following subs look much better. I also note that the elongation increases as you get out to the edge, which I concluded was an artifact of the telescope being a refractor. Coma?

Balance is something I checked at the beginning, but I did have to add an extension tube to get more focus range, so I will double check when I set up again. And I'll see what I can do about being more accurate with the alignment, but the stars I have available are not optimal, which doesn't help as I am just learning how to align that way.

The 450D has some noise issues to begin with, and I'm not invested enough yet to consider modifying it with a cold finger. It has been my first choice to start learning with, but I will move to my Canon 5D or 7D a little later, maybe even try my Nikon D7200.

Thanks again for all the input!
  #16  
Old 22-12-2016, 12:15 PM
Camelopardalis (Dunk)
Drifting from the pole

Join Date: Feb 2013
Location: Brisbane
Posts: 5,425
Kevin, must have been the random sub I looked at then (I picked only one)

If you're seeing elongation towards the edge I'd try varying the spacing from your flattener to the camera, that's usually the cause. Refractors don't usually suffer with coma.

The Tarantula is another target with quite high dynamic range, but it is a bright target. Pick your poison! It's well placed at the moment, which makes it popular.

Adding an extension tube can be enough to upset the balance unfortunately, as it pushes the weight of the camera further out. It might not be much, but it's worth checking.

Sounds like you've got some interesting cameras to experiment with
  #17  
Old 22-12-2016, 08:46 PM
sharptrack2 (Kevin)
Registered User

Join Date: Oct 2015
Posts: 331
Thanks Dunk,

I don't actually have a field flattener at the moment. On my wish list.

Friday night is looking really good at the moment. I am hoping I might be able to get out of work early and get home to setup, try out some of the recommendations.

If nothing else get in some quality observing time... haven't taken much time to just look around lately, always focused on fixing, testing, or just fussing with things.
  #18  
Old 23-12-2016, 01:28 PM
Camelopardalis (Dunk)
Drifting from the pole

Join Date: Feb 2013
Location: Brisbane
Posts: 5,425
Ahhh well not having a flattener will make it susceptible to field curvature, especially as it's a fairly fast scope. If it's in focus in the centre there's not much else you can do about the rest.
  #19  
Old 23-12-2016, 02:12 PM
glend (Glen)
Registered User

Join Date: Jun 2013
Location: Lake Macquarie
Posts: 7,033
Quote:
Originally Posted by Camelopardalis View Post
Ahhh well not having a flattener will make it susceptible to field curvature, especially as it's a fairly fast scope. If it's in focus in the centre there's not much else you can do about the rest.
Well, you might be able to crop out the corners or edges if the curvature is limited.