Can I do "HDR-like" processing techniques with my QHY9 CCD? Just reading Humayun's comments in Marin Pugh's widefield thread here, and the two sets of exposures he captured to make this great image, made me ponder the options for multiple exposure lengths in CCD imaging when time and sky allow. My reasoning:
Although the QHY9 is 16-bit for great dynamic range, it has a relatively small well depth
The 8" Newt sucks in the light and saturates stars in my images pretty quickly (particularly given the QHY9's smaller well depth)
Pixinsight now has some very nice HDR tools
Logically this should be a good way of going deep for nebulosity on DSOs while maintaining star detail and colours
Presumably there would be a lot more fiddling about, proportional to how many HDR "layers" you collect.
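To make the idea above concrete, here is a minimal sketch (not anyone's actual workflow) of merging a short and a long exposure, assuming both are already registered, linear, and normalised to the 0–1 range; the function and parameter names are made up for illustration:

```python
import numpy as np

def hdr_merge(short, long, sat_level=0.9):
    """Blend a long exposure with a short one, assuming both are
    registered, linear, and normalised to the 0-1 range.

    Pixels where the long exposure nears saturation are smoothly
    replaced by the short exposure, scaled to match brightness."""
    # Estimate the exposure ratio from pixels well exposed in both frames.
    ok = (long < sat_level) & (short > 0.01)
    scale = np.median(long[ok] / short[ok])

    # Weight ramps from 1 (trust long) down to 0 (trust short) near saturation.
    w = np.clip((sat_level - long) / (sat_level * 0.1), 0.0, 1.0)
    return w * long + (1.0 - w) * (short * scale)
```

This is roughly what automated HDR tools do internally; real implementations also handle noise weighting and more than two exposure layers.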
Instead of letting the software determine what value each pixel should get from each image in the "HDR" process, why don't you learn layer masking in Photoshop? You have far more control over the final image, and you don't end up with haloing, saturation defects and gradients that wouldn't exist if you blended manually.
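The manual-blending idea is simple to express in code terms: a layer mask is just a per-pixel weight between two layers. A hedged sketch, with hypothetical arrays (`base` as the deep nebulosity stretch, `overlay` as the short-exposure star layer) and a deliberately crude luminance mask:

```python
import numpy as np

def mask_blend(base, overlay, mask):
    """Photoshop-style layer blend: where mask is 1 the overlay shows,
    where it is 0 the base shows; values in between mix the two.
    All arrays are floats in the 0-1 range and the same shape."""
    return mask * overlay + (1.0 - mask) * base

def star_mask(image, threshold=0.7, softness=0.1):
    """A crude luminance mask: pixels brighter than the threshold
    (i.e. stars) ramp smoothly toward 1."""
    return np.clip((image - threshold) / softness, 0.0, 1.0)
```

The point of doing it by hand in Photoshop is that you paint or feather that mask yourself instead of accepting a formula, which is exactly the control being argued for here.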
H
Similar to the 3 or 4 layers people do to get the Trap in M42, I guess, H? On my "todo" list is to start doing a separate layer for star processing, so I guess these are all facets of the same thing?
The point being that you, as the processor, have the ultimate control in the amount you wish to show through each layer.
Using "HDR" routines leaves everything up to the software with very minimal input on your part.
Furthermore, learning layer masking in Photoshop will open doors to you in all kinds of image processing, not just astrophotographic.
I'm a bit of a zealot in this regard. I hate to see people spend thousands of dollars on hardware, only to be let down by software. Software should not be the weak link in the image capture and processing train.
The year was 1997. If I'm not mistaken, Dr. Paul Debevec received a standing ovation at SIGGRAPH when he presented this. I have yet to see more realistic lighting in an animation to date. I used to be a 3D modelling/animation nerd in my younger years. I sometimes wonder where I'd be now if I'd kept up with it!
In a nutshell, HDR image processing was used after photographing chrome balls from different angles inside St. Peter's Basilica. The basic idea is that the chrome balls capture all the specular light sources inside the building. A model of the building is then created and the chrome-ball imagery wraps (as it were) around the model, providing global illumination; the highlights then become the light sources. So there are no "artificial" light sources in the animation.
This, and video games, is where HDR excels. At the end of the day, if it works for you, and your images turn out how you want them to, then more power to you.
Here is an animated GIF of the longest exposure and the HDR. You will see that the dim stuff is about the same but the overexposed stars are far better.
Thanks Bert - I'm always in awe of your widefield work, and this helps show a bit of the magic that goes into bringing your data together. I'll just have to get my hands dirty and give it a try sometime, versus layers and PixInsight. Like most things in this hobby, I expect it will be a case not of which tool is best, but of figuring out what to use where and how to make it work effectively.
Wonderful widefield Orion too, BTW.
Bert, while I'm learning, how would you suggest I stop down a nifty fifty (50mm f/1.8 II) lens when I use it on my 450D? I believe you have made up something from a coffee tin? I'd love to try a bit more widefield this year, and I expect stopping down like this would also help a bit with dew until I sort out a heater solution.