Have added 2 hrs' worth of luminance to the original image data. It has indeed deepened the nebulosity of the image, but disappointingly there was no really significant reduction in noise.
A strange thing happened during the capture of the new data. I set up an imaging session in MaximDL ... 11 luminance images of 15 mins each. Images 6, 7, 8 and 9 all ended up with out-of-round stars.
With regard to processing:
I have only done the Digital Development steps to the Luminance data and then to the LRGB image.
I'm finding that after this process little else I do has a positive effect on the image .. is this normal??
Do others find adding new data a major episode? Registering all the new images to the old ones, and then reprocessing it all ... certainly a time-consuming thing!!!
Nice work Jeff. You're starting to really get your rig dialled in, with ever-increasing quality output. Personally, I would suggest you don't depend on DDP for your data stretching. Don't get me wrong, it's easy and quite effective, but you'll have much greater control over the dynamic range of your data if you stretch it manually using PS levels and curves. If you want to use DDP just to manage your stellar profiles, that's fine. Just blend the stars back in with a layer mask and screen mode. This works quite well. While stretching the data is the crux of image processing, as it allows you to choose what you want to display to the audience, there is certainly more that can be done with the data to enhance an image.
Last edited by jase; 14-08-2008 at 06:58 PM.
Reason: typos
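The DDP-versus-manual-stretch trade-off jase describes can be sketched numerically. This is an illustrative approximation only: `ddp_stretch` and `manual_stretch` below are hypothetical helpers, not MaximDL's or Photoshop's actual algorithms, and the constants are invented.

```python
import numpy as np

def ddp_stretch(img, k=None):
    """DDP-like stretch: img / (img + k) compresses highlights the way
    film development does. A hypothetical approximation, not MaximDL's
    exact Digital Development algorithm."""
    if k is None:
        k = np.mean(img)  # pivot near the mean level
    return img / (img + k)

def manual_stretch(img, black, white, gamma=0.5):
    """Levels/curves-style stretch: you pick the black point, white
    point and curve shape yourself, which is the extra control a
    manual PS stretch gives over a one-shot DDP."""
    clipped = np.clip((img - black) / (white - black), 0.0, 1.0)
    return clipped ** gamma

# Toy luminance ramp standing in for real data
img = np.linspace(0.0, 65535.0, 100)
auto = ddp_stretch(img)                  # one knob (k) controls everything
manual = manual_stretch(img, 0, 65535)   # full choice of black/white/gamma
```

The point of the comparison: the DDP-style curve has a single parameter, while the manual stretch exposes every decision about dynamic range to the user.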
You may find it encouraging or discouraging, depending on your state of mind whilst viewing ;-).
I'm with Jase, I don't use DDP or any processing in DL or CCDStack (except, critically, deconvolution, the single most powerful tool there is, nothing like it in PS). You get much more control in PS, and of course a history to go back to if a step is no good.
Extra data is a case of diminishing returns: since SNR only grows with the square root of total exposure, you need to double your exposure to get a significant improvement. In your case, at 150 mins, 300 mins is then required to make a worthwhile improvement.
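The doubling rule above follows directly from square-root SNR scaling. A quick sketch in plain Python (generic sky-limited model, nothing specific to Jeff's camera):

```python
import math

def snr_gain(t_old, t_new):
    """Relative SNR improvement when total exposure grows from t_old
    to t_new minutes, assuming a sky-limited stack where SNR scales
    with the square root of total exposure time."""
    return math.sqrt(t_new / t_old)

print(f"150 -> 300 min: {snr_gain(150, 300):.2f}x SNR")  # doubling -> ~1.41x
print(f"150 -> 270 min: {snr_gain(150, 270):.2f}x SNR")  # +2 hrs -> ~1.34x
```

So adding 2 hrs to an existing 2.5 hrs buys roughly a third more SNR, which is noticeable but not dramatic, matching Jeff's experience.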
Your image shows good depth star-wise, but not so much the outer neb. Maybe 20 or 30 min subs might be worth a try. Stacking more shortish subs (depending on subject) doesn't always improve noise; high readout noise just stacks up too.
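The readout-noise point can be made concrete with a toy noise model. All the rates below are invented for illustration; they are not the figures for Jeff's actual camera.

```python
import math

def stack_snr(n_subs, sub_min, signal_rate=1.0, sky_rate=5.0, read_noise=10.0):
    """SNR of n_subs stacked subs of sub_min minutes each.
    Signal and sky noise grow with total time, but read noise is
    paid once per sub, so many short subs can lose to a few long ones."""
    t = n_subs * sub_min * 60.0                      # total seconds
    signal = signal_rate * t                         # object electrons
    noise = math.sqrt(signal + sky_rate * t + n_subs * read_noise ** 2)
    return signal / noise

# Same 150 minutes total exposure, split differently:
many_short = stack_snr(30, 5)   # thirty 5-min subs
few_long = stack_snr(5, 30)     # five 30-min subs -> slightly higher SNR
```

With these made-up numbers the difference is small, but the gap widens as read noise grows relative to sky signal, which is exactly Fred's argument for trying longer subs.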
And yes, huge numbers of subs are a pain. When you get serious, get CCDStack; it's even more time consuming, but excellent control. Unfortunately it's RAM based, and then disk caching, which when that kicks in is painfully slow. I find splitting into groups of 10 subs or so, combining by median and then summing the resulting masters is quicker than doing it all at once. IP does batch processing straight from disk; it puzzles me why the CCD apps haven't done this too.
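Fred's group-then-sum workaround can be sketched in a few lines of numpy. `grouped_combine` is a hypothetical helper written for illustration, not a CCDStack function:

```python
import numpy as np

def grouped_combine(subs, group_size=10):
    """Median-combine subs in groups, then sum the group masters.
    The median rejects outliers (hot pixels, satellite trails) within
    each group; summing the masters then builds the final deep stack
    without ever holding every sub in memory at once."""
    masters = []
    for i in range(0, len(subs), group_size):
        group = np.stack(subs[i:i + group_size])
        masters.append(np.median(group, axis=0))  # outlier-resistant master
    return np.sum(masters, axis=0)                # final deep stack

# Toy run: 30 tiny "subs" of constant signal 100 -> 3 masters, summed
subs = [np.full((4, 4), 100.0) for _ in range(30)]
deep = grouped_combine(subs)
```

The design trade-off: a pure median of all 30 subs would reject outliers best, but summing median-combined group masters keeps peak memory at one group's worth of frames.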
Thanks again Jase and .... Fred .. You're right .. that is one hell of an image .. looks like space DNA, and I'm not expecting to be able to put something like that together .. would be nice, but I'm a little more realistic .. my plight at the moment is recognizing which methods of processing best serve me and understanding them, so I use some logic in their application rather than trial and error as is the case now ... It's like I keep saying .. there is SO much to know in this game.
Getting serious doesn't require CCDStack, Fred. It's about using the tool that works best for you and maximises the data potential. More important is knowing how to use the tool and when to use it. For reference, you can perform align and combine functions straight from disk in MaximDL. You still require a reasonable amount of memory, but nowhere near the requirement of opening all frames. I have not pushed its limits so can't comment on how many subs you can combine. I've done ~20 STL11k subs in one go, no problems - I will acknowledge this is a weak test for mega data.
Didn't know about the DL batch process Jase (or I forgot).
"works best for you" and "maximises the data potential" aint the same subject Jase. The 1st depends on the tools you have, money, time and laziness/skill. The 2nd is what potentially allows the best result with maximum effort, its not subjective. I find CCD stack a pain in the bum to use, DL is much easier, but it allows much more "potential". More control can only be better by definition, if you know how to use the control features. If you start a race with a Farrari, you have "potentially" a better chance of winning than with a "best for me" mini minor .
Perhaps we are talking about two different things re: MaximDL processing. You mention batch processing. This means sequencing (MaximDL terminology). In any case, yes, you can record your own sequences for batch processing in MaximDL.
“The Batch Process tool is used to record commands drawn from the MaxIm DL application menu into sequences, sometimes called "macros", and to edit them and play them back. For interactive commands, that is, those which invoke a modal dialog box to obtain control parameters, the complete set of values as they were at the time the command was recorded are all saved in the sequence.” (excerpt from the manual). In MaximDL 5 the Batch Process window can be activated from the View menu. This is a very neat feature; for example, I have a sequence that opens the subs, calibrates them, and performs an equalized screen stretch across them before automatically blinking for me to evaluate quality. Basic, but efficient. I've got a few trickier ones, but this explains the context of batch processing/sequencing.
What I suspect you mean (being on topic) was the batch processing of files (subs). In MaximDL 4.x you can use the Combine Files menu item under the File menu for disk-based processing. Note: this assumes the files are already registered (aligned). MaximDL 5 has the new Stack command that does the same thing, but also aligns and scrutinises the subs, i.e. if a sub has stars or other features that don't meet criteria you define (FWHM, roundness, intensity, contrast) it is rejected from the combine routine. I'm still working this feature over to see what it can do, but so far it looks good.
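The kind of per-sub quality gate described above can be illustrated as follows. The field names and thresholds are hypothetical, chosen purely for illustration; this is not MaximDL's actual API:

```python
from dataclasses import dataclass

@dataclass
class SubStats:
    fwhm: float       # star size; bigger means softer seeing or focus
    roundness: float  # 0.0 = round stars, higher = elongated

def passes_criteria(s, max_fwhm=3.5, max_roundness=0.2):
    """Keep a sub only if its measured stars meet both limits,
    mimicking FWHM/roundness rejection in a stack routine."""
    return s.fwhm <= max_fwhm and s.roundness <= max_roundness

subs = [SubStats(2.8, 0.05),   # good sub
        SubStats(4.1, 0.05),   # bloated stars -> rejected
        SubStats(2.9, 0.40)]   # out-of-round stars -> rejected
kept = [s for s in subs if passes_criteria(s)]
```

A gate like this would have automatically dropped Jeff's four out-of-round frames before the combine.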
I think you summarised it nicely – "More control can only be better by definition, if you know how to use the control features" ... So in other words, use a tool that "works best for you", i.e. you understand how to use it and when to use it. If you don't understand its functions and capabilities, and don't know what effect it's having on the data, it will be difficult to "maximise the data potential". How's that for a play on words ... and sorry to hijack the thread Jeff - again, great image.
CCDStack is a great program, but for me its one big weakness is memory control. It creates massive files (a colour version of my LMC was 1.2 gigabytes in CCDStack!).
The maximum number of STL11 files I can process in CCDStack is 6, and that is using a Core 2 Duo processor with 2 gigabytes of fast, latest-type RAM.
If I try 7 I get memory errors. I usually do 5 at a time.
But CCDStack's best feature is that its natural workflow matches the steps you have to take to process your images.