IceInSpace > General Astronomy > Radio Astronomy and Spectroscopy
#21 - 10-03-2017, 07:08 PM - Merlin66 (Ken)
Steven,
PM sent.
We need to discuss off line then bring it back to the forum.
#22 - 10-03-2017, 08:06 PM - StuTodd
Hi.
I think you're barking up the wrong tree there, but some astrophotography methods can be used in spectroscopy.
I am sure Ken will explain.

Stu

Last edited by StuTodd; 11-03-2017 at 05:24 AM.
#23 - 10-03-2017, 09:59 PM - robin_astro
Hi Steven,

Dither and drizzle is a legitimate way of improving resolution in undersampled images even in scientific imaging but are your images actually undersampled ? (ie is your star image FWHM less than 2 pixels?)

Aligning and stacking (as typically used in planetary imaging) is also a legitimate tool to maximise resolution when using the Star Analyser. I am wondering if that is what we are seeing here. The drizzle algorithm may align them as part of the process. When you normally combined your images did you align them? If not can you try a simple align and stack to see if that has a similar effect?

Robin
#24 - 11-03-2017, 10:37 AM - sjastro
Quote:
Originally Posted by robin_astro View Post
Hi Steven,

Dither and drizzle is a legitimate way of improving resolution in undersampled images even in scientific imaging but are your images actually undersampled ? (ie is your star image FWHM less than 2 pixels?)

Aligning and stacking (as typically used in planetary imaging) is also a legitimate tool to maximise resolution when using the Star Analyser. I am wondering if that is what we are seeing here. The drizzle algorithm may align them as part of the process. When you normally combined your images did you align them? If not can you try a simple align and stack to see if that has a similar effect?

Robin
Hello Robin,

Given seeing of 2 arcseconds and an image scale of 1.1 arcseconds/pixel, the "ordinary" images are definitely not undersampled and almost meet the Nyquist sampling requirement.

Given that in the combined images the spectra are in focus and the stars are out of focus, I'm not even sure what constitutes an undersampled or oversampled image in this case.
Data was collected over two days, so both sets of data were effectively dithered.
The software for the normally and drizzle combined images required the individual images to be aligned prior to combining, otherwise registration errors would be apparent.

If one examines the ratio of the peak height of the drizzle combined over the normally combined spectra, for the H-gamma, H-beta and H-alpha peaks as a function of "enhancement", the ratio declines as wavelength increases.
This could be consistent with the idea that, since the FWHM of the Airy disc increases with wavelength, the spectral image becomes progressively more "oversampled" at longer wavelengths and the effectiveness of drizzling decreases. (Pure speculation.)
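As a rough numerical check of this speculation (not a measurement from the thread): the diffraction-limited Airy FWHM scales linearly with wavelength, so a trace critically sampled at H-gamma is intrinsically broader at H-alpha. The aperture below is an arbitrary example value.

```python
import math

# Airy FWHM is approximately 1.028 * lambda / D radians for a circular
# aperture; D here is an assumed example aperture, not Steven's telescope.
D = 0.25  # aperture in metres (assumed)
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def airy_fwhm_arcsec(wavelength_m, aperture_m):
    # FWHM of the Airy pattern, converted from radians to arcseconds
    return 1.028 * wavelength_m / aperture_m * RAD_TO_ARCSEC

h_gamma = airy_fwhm_arcsec(434.0e-9, D)   # H-gamma
h_alpha = airy_fwhm_arcsec(656.3e-9, D)   # H-alpha
ratio = h_alpha / h_gamma                 # ~1.5, independent of aperture
```

The ratio is simply the wavelength ratio, so the H-alpha Airy disc is about 50% wider than the H-gamma one regardless of the telescope.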

In the attachment are cropped images of the drizzle and normally combined raw spectra. The left-hand drizzle combined image looks more like it has been contrast enhanced (note the burnt-out stellar images).

Regards

Steven
Attached: drizzle_stacked_cropped.jpg, normal_stacked_cropped.jpg
#25 - 11-03-2017, 11:11 AM - robin_astro
Hi Steven,

Yes, I agree your images are not undersampled. The width of the features we are looking at (eg the Balmer lines) is much wider than the pixel size. In this case, as the normal image was also combined with the sub exposures aligned, the algorithm is enhancing the image contrast in a way that I would not have expected, so whatever the algorithm is doing must be suspect and probably best avoided for scientific imaging.
One test would be to measure the Equivalent Width of the lines in the two spectra (perhaps better done on a reference star spectrum with relatively sharp lines and good SNR). This should be invariant under any procedure (indeed it should be independent of the instrument), ie as the maximum intensity in the line increases, the width of the line should decrease to maintain the area under the line.
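This equivalent-width test can be sketched numerically. The Gaussian line below is synthetic, not data from the thread; the point is only that EW, being measured against the local continuum, survives any overall rescaling a processing step might apply.

```python
import numpy as np

# Synthetic absorption line: continuum of 1 with a Gaussian dip at 6563A
wl = np.linspace(6500.0, 6630.0, 2001)                 # wavelength, Angstrom
continuum = np.ones_like(wl)
flux = continuum - 0.5 * np.exp(-0.5 * ((wl - 6563.0) / 2.0) ** 2)

def equivalent_width(wl, flux, continuum):
    depth = 1.0 - flux / continuum
    # trapezoidal integration, written out to avoid NumPy version differences
    return np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(wl))

ew = equivalent_width(wl, flux, continuum)              # ~2.5 Angstrom here
ew_scaled = equivalent_width(wl, 3.0 * flux, 3.0 * continuum)
# ew == ew_scaled: a processing step that changes EW has altered the science
```

Any stretch applied to the flux alone (without the continuum following it) would change the EW, which is what makes this a useful integrity check.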

(Note that variations in resolution along the spectrum are to be expected with the simple grating in the converging beam setup. The focus varies due to field curvature so only one wavelength can be in focus and chromatic coma increases with wavelength so that for example if the short wavelength end is in best focus, features towards the red end will be increasingly blurred.)

There is a case for using drizzle in undersampled slitless spectroscopy however eg as used in the WFC grism spectrographs on the WFC on the HST
http://adsabs.harvard.edu/abs/2006hstc.conf...85K
and planned for the JWST
https://ntrs.nasa.gov/archive/nasa/c...0150023515.pdf

Cheers
Robin
#26 - 11-03-2017, 01:24 PM - sjastro
Things are starting to make sense.

An issue I have found with astro imaging software is that it is not designed to align and stack spectra efficiently.

Previously I needed to align and combine (normal and drizzle) the two separate datasets in PixInsight, followed by combining each dataset in Registax.
By tweaking the alignment settings in PixInsight, I am now able to align and combine the individual images in one hit.

I now find there is no appreciable difference between the normal and drizzle combined images, which is consistent with the fact that there is no undersampling, as seen in the attachment.

It appears that when I saved the combined image in Registax, the software automatically stretched the saved image.
Somehow Registax selectively stretched the combined drizzle image.
There is no real science in the drizzled image.

Now that I am able to drizzle combine all images in PixInsight, the next step is to use my high quality 300mm Pentax lens on the ST-10XME, which definitely produces undersampled images.

Steven
Attached: Comparison_latest.jpg
#27 - 11-03-2017, 02:09 PM - Merlin66 (Ken)
For those members not familiar with the drizzle techniques, I'd refer them to:
http://www.astrosurf.com/buil/us/spe9/lrgb22.htm

It is an image processing tool developed to improve some of the undersampled Hubble images. (Undersampling occurs when the signal is not sampled by at least two pixels per resolution element - the Nyquist criterion.)
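That sampling criterion can be sketched as a simple check; the numbers below are illustrative, not from any particular setup in this thread.

```python
# A star image is undersampled when its FWHM spans fewer than ~2 pixels
# (the Nyquist criterion Ken describes).
def pixels_per_fwhm(seeing_fwhm_arcsec, image_scale_arcsec_per_px):
    return seeing_fwhm_arcsec / image_scale_arcsec_per_px

def is_undersampled(seeing_fwhm_arcsec, image_scale_arcsec_per_px):
    return pixels_per_fwhm(seeing_fwhm_arcsec, image_scale_arcsec_per_px) < 2.0

print(is_undersampled(3.0, 0.7))  # long focal length: well sampled -> False
print(is_undersampled(3.0, 2.5))  # short camera lens: undersampled -> True
```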

What Steven has found is that the science data in his spectral images was not enhanced by the drizzle technique.

In spectroscopy we are advised never to process the raw data beyond the basic needs of darks/flats/sky background removal. This ensures the integrity and accuracy of the data. The best way to improve the data is to collect more! The higher the signal, the higher the signal to noise ratio.

Effective stacking of spectral subs can be done with various freeware packages (BASS Project, ISIS, etc.) or with astrophotography programs that have spectroscopy stacking functions, such as AstroArt 6.

It's been an interesting investigation, and I thank Steven for his interest and contributions.
Onwards and Upwards
#28 - 12-03-2017, 12:57 AM - robin_astro
Hi Steven,
Quote:
Originally Posted by sjastro View Post
It appears when I saved the combined image in Registax, the software automatically stretched the saved image.
Somehow Registax was able to selectively stretch the combined drizzle image.
There is no real science in the drizzled image.
Ah OK. I am glad that the drizzle part appears to be working as expected though (ie no enhancement where there is no undersampling).

Quote:
Now that I am able drizzle combine all images in PixInsight, the next step to use my high quality 300mm Pentax lens on the ST-10XME which definitely produces undersampled images.
Yes, in this situation some form of stacking of dithered images is useful to suppress ripple artifacts in the spectrum caused by uneven pixel coverage, for example where spectra are not exactly horizontal, and particularly where one-shot colour cameras are used, since in that case 3 out of 4 pixels are effectively dead in some regions of the spectrum. You can see this effect on one of my spectra, for example, taken using the objective grating configuration (bottom of the page):
http://www.threehillsobservatory.co....oscopy_11a.htm
It would be interesting to see what effect drizzling has in this situation.
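As a toy illustration of where that ripple comes from (an assumed RGGB mosaic, one sensor row, only the red-sensitive pixels responding, so this simplifies the full 3-out-of-4 dead-pixel situation):

```python
import numpy as np

# In the deep red, only the R pixels of an RGGB mosaic respond. A spectrum
# trace crossing the mosaic therefore samples unevenly; stacking dithered
# (shifted) exposures fills in the gaps.
flat_spectrum = np.ones(16)              # ideal flat signal along the trace
bayer_row = np.tile([1.0, 0.0], 8)       # R,G,R,G,... sensitivity in one row
single = flat_spectrum * bayer_row       # raw extraction: full of zeros

# dither by shifting the trace one pixel, then stack the two exposures
dithered = flat_spectrum * np.roll(bayer_row, 1)
stacked = 0.5 * (single + dithered)      # the ripple averages out
```

A single exposure alternates between signal and zero along the trace; the two-position stack recovers a uniform (if attenuated) signal, which is the effect dithering exploits.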

Cheers
Robin
#29 - 28-03-2017, 08:58 PM - robin_astro
Hi Steven,
Quote:
Originally Posted by sjastro View Post
I ended up creating a response curve based on the QE data of the KAF-3200ME chip used in my CCD.
This seems to be giving a more consistent result than deriving the curve from an existing spectrum combined with professional data.

The attachment shows the comparison response curves.

Regards
Sorry for the thread revival, but unfortunately you cannot obtain the instrument response this way, as the camera response is only part of it. The grating response is just as significant, and atmospheric extinction also makes a contribution. (Even the telescope optics make a small contribution.) The only practical way to include all of these is to measure a standard star with a known spectrum under the same conditions as the target. When done correctly it does work, as these examples using a Star Analyser and an ALPY spectrograph show.
http://www.threehillsobservatory.co....roscopy_21.htm

Cheers
Robin
#30 - 29-03-2017, 02:18 PM - sjastro
Quote:
Originally Posted by robin_astro View Post
Hi Steven,


Sorry for the thread revival but unfortunately you cannot obtain the instrument response this way as the camera response is only part of it. The grating response is just as significant and the atmospheric extinction also makes a contribution. (Even the telescope optics make a small contribution) The only practical way to include all these is to measure a standard star with a known spectrum under the same conditions as the target. When done correctly, it does work though as these examples of using a Star Analyser and ALPY spectrograph show.
http://www.threehillsobservatory.co....roscopy_21.htm

Cheers
Robin
Robin,

No issues at all about reviving the thread.
I am still very much a beginner and the feedback provided is part of the learning process.

I agree that relying purely on the camera response as an instrument correction isn't foolproof; the major problem is that the Planck curves are not accurate.
The problem I have is with the RSpec video method (which I assume is a universal method), where one uses an instrument-response-calibrated professional spectrum as a starting point for determining the response of one's own setup.
I find that with this method, details which are muted and insignificant in the uncorrected spectra, particularly below 400nm, become magnified and unrealistically detailed when the instrument correction is applied.
This problem seems to be due to the much higher sensitivity of professional CCDs at shorter wavelengths.
This characteristic seems to have been "carried over" into the instrument correction of my own setup.
No such problem exists when the instrument correction is based only on the QE properties of one's own CCD.
Of course I may be doing something wrong that deviates from the video, but it isn't obvious.

The other issue that arises is the effect of atmospheric extinction when long exposures are required.
Being relatively faint, 3C 273 required lengthy exposures during which the elevation of the object changed markedly over the imaging session.
Is there a defined limit on the elevation change beyond which one requires a different star to produce a new instrument response curve?
If so, this would define the maximum time for a subexposure.

Regards

Steven
#31 - 30-03-2017, 01:01 AM - robin_astro
Hi Steven,

Sorry, this is a very long post but these issues come up with most people new to astronomical spectroscopy and the Star Analyser so I have tried to be as comprehensive as possible.

Forget Planck curves. Despite what you may read, stars are nowhere near being black bodies, and the effective temperature Teff usually quoted is a theoretical construct which has little to do with the actual surface temperature of the star, if such a thing could even be defined. This diagram on Wikipedia nicely shows the problem, using Vega as an example:
https://commons.wikimedia.org/wiki/F...comparison.png

If using the camera response gives you roughly the right answer, this is purely luck, due to the fact that the response of the grating is roughly similar to that of the camera. To be honest, you might as well have sketched out an instrument response freehand and used that. There are some examples of typical efficiency curves for a transmission grating here:
http://www.optometrics.com//App_Them...s_t_graph1.gif
The Star Analyser will be similar. The response, excluding the atmosphere and other optical components in the system, will be very roughly this multiplied by the CCD response.


Working with such a simple slitless system as the Star Analyser means we have to accept some limitations, but the document I mentioned on my website
http://www.threehillsobservatory.co....t_response.pdf
gives a good idea of what you should be able to achieve in terms of response correction on bright targets using a monochrome astro camera and a Star Analyser. It does take practice though, so I recommend beginners choose a few known reference stars to hone their skills on, in the same way as I described there. (Note faint objects are tougher to get an accurate instrument response with slitless systems, mainly due to difficulties in subtracting the sky background accurately, which can be large compared with the signal, particularly at the blue end.)

The method of using a reference star to measure the instrument response (hot A or B types are chosen as they are relatively line-free, so it is easier to divide them and eliminate any remaining artifacts from line residuals) is the same as used by professionals, except that they usually measure the instrument response and atmospheric extinction separately. To avoid that complication, amateurs generally use a reference star at a similar elevation. (Provided you are observing above ~40 deg, within 10 deg in altitude is good enough to get reasonable accuracy down to 3800A, which is as far as you can reasonably expect to go with a Star Analyser given the camera and grating response. At higher elevations you can be further away, but at lower elevations you need to be closer in altitude.) If you want to explore this further I can suggest, for example, Christian Buil's website (in French, but Google for example translates well):
http://www.astrosurf.com/buil/extinction/calcul.htm
http://www.astrosurf.com/buil/atmosp...ansmission.htm
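The altitude rule can be roughly quantified with the plane-parallel airmass approximation; the extinction coefficient below is an assumed typical blue-end value, not a measured one.

```python
import math

k = 0.25  # assumed extinction, mag per airmass, at the blue end

def airmass(altitude_deg):
    # plane-parallel approximation: airmass ~ sec(zenith distance);
    # adequate well above the horizon, where this rule applies
    return 1.0 / math.cos(math.radians(90.0 - altitude_deg))

def extinction_error_mag(alt_target_deg, alt_ref_deg, k_mag_per_airmass):
    # flux error from calibrating with a reference star at a different altitude
    return abs(airmass(alt_target_deg) - airmass(alt_ref_deg)) * k_mag_per_airmass

high = extinction_error_mag(70.0, 60.0, k)  # ~0.02 mag for a 10 deg offset
low = extinction_error_mag(40.0, 30.0, k)   # ~0.11 mag for the same offset
```

The same 10 degree offset costs several times more low down, which is why the tolerance tightens at lower elevations.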


I am not an RSpec user, but you can see the steps I used to achieve the results shown using ISIS and Vspec in the presentation "Low Resolution Slitless Spectroscopy (Star Analyser) - Observing a fast transient of a T Tauri star", particularly slides 5-33, downloadable from this page:
http://www.threehillsobservatory.co....roscopy_10.htm
The procedure will be similar in Rspec.

Hints and tips:

1. When making the observation, plan where your spectrum will lie relative to other stars and spectra to avoid cross-contamination between star spectra. Rotate the camera plus grating if necessary to avoid this.

2. If possible, use reference stars with an actual measured professional spectrum, eg MILES stars. (The Pickles stars in the Vspec and RSpec libraries are generic for a particular spectral type and can differ from the actual spectrum of a particular star due to interstellar extinction, metallicity or inaccuracies in spectral classification, for example.)

3. Make your binning zone wide enough to include all the spectrum signal (turn up the gain in the image). Don't make it too wide though, otherwise you just add extra noise.

4. Make sure the zones for background subtraction above and below the spectrum are close to the measured spectrum, but not so close that there is contamination from the target spectrum (turn up the gain in the image). The zones also need to be free of contaminating stars and spectra. This is particularly important in regions of the spectrum where the signal may be low, eg the blue, where the instrument response is low and small errors in background subtraction can give large errors in the final spectrum.

5. After wavelength calibration, crop your spectrum to 3800-7800A. (Below 3800A the sensitivity is too low to give reliable data, and above 7800A you risk contamination from the second order spectrum.)

6. Rescale your measured spectrum to approximately 1 on average, and filter your library reference spectrum to roughly match the resolution of your measured version. This makes the division easier and the resulting raw spectral response more accurate. Don't be afraid to wavelength shift your measured spectrum slightly if necessary to bring the lines in the two spectra into line.

7. After division, remove any remaining artifacts where the lines have not divided accurately before smoothing the result. This is particularly tricky in the blue region where the H Balmer lines crowd together and merge. (I normally remove the telluric lines at the red end too, so they do not appear in the response and will therefore remain in the measured spectra, but that is a personal choice.)

8. The smoothing stage is critical. You are aiming to smooth out any remaining noise artifacts while still keeping the underlying instrument response. Take care not to over-smooth; this is a common beginner's fault. In particular, look carefully at how well the smoothed curve fits the raw curve at the blue end. The instrument response is low here, so what look like insignificant errors become magnified. Also take care not to smooth out broad ripples which may actually be in the camera response. (Kodak KAF and some CMOS sensors can be particularly prone to these.)

9. The first acid test of a good instrument response is to apply it to the measured reference spectrum. The result should, ideally, exactly match the library version. If it does not, look closely at where it differs, try to understand why, and if necessary redo the response calculation.

10. Check your skills by measuring other stars with known spectra and differing spectral types and correcting them with your instrument response. The results should be close to the library version for all stars.
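Steps 5 to 9 can be sketched in outline as follows. The synthetic spectra and the polynomial smoothing are stand-ins of my own, not a prescription: in practice the smoothing is done interactively in ISIS, Vspec or RSpec.

```python
import numpy as np

def instrument_response(wl, measured, library, degree=7):
    measured = measured / np.mean(measured)        # step 6: rescale to ~1
    raw_response = measured / library              # step 7: divide by library
    x = (wl - wl.mean()) / (wl.max() - wl.min())   # normalise for a stable fit
    coeffs = np.polyfit(x, raw_response, degree)   # step 8: smooth gently
    return np.polyval(coeffs, x)

# synthetic reference star: library spectrum times a smooth "true" response
wl = np.linspace(3800.0, 7800.0, 500)              # step 5: 3800-7800A only
library = 1.0 + 0.1 * np.sin(wl / 500.0)
true_response = np.exp(-0.5 * ((wl - 5500.0) / 1500.0) ** 2)
measured = library * true_response

response = instrument_response(wl, measured, library)
corrected = (measured / np.mean(measured)) / response
# step 9 (acid test): corrected should closely match the library spectrum
```

With noise-free synthetic data the round trip is nearly exact; with real data, the artifact removal and careful blue-end smoothing of steps 7 and 8 are where the skill lies.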
#32 - 30-03-2017, 04:07 AM - robin_astro
I forgot to mention the potential effect of flat defects on the calculation of the instrument response. Spectroscopic flats contain both spatial and spectroscopic information, both of which can be corrected for with slit spectrographs, but this becomes incredibly complex to do for slitless systems and is impractical for the amateur. I therefore recommend taking a flat image to check for any obvious local defects, physically removing them if possible (eg dust donuts), and avoiding placing the spectra in these areas if not. The target and reference star spectra should then be placed as far as possible at the same position on the camera sensor, so that larger scale defects such as vignetting cancel out as far as possible. (You can test for the potential effects of flat defects by taking spectra at different locations around the camera field of view and comparing them.)

All this is only important if you are looking to squeeze the most out of the data though. The strength of the simple Star Analyser, and why I developed it, is that it can reveal the obvious features in the spectra of many objects with minimum effort, without worrying about any of this!

Cheers
Robin
#33 - 30-03-2017, 02:06 PM - sjastro
Hello Robin,

Quote:
Originally Posted by robin_astro View Post
Hi Steven,

Forget Planck curves. Despite what you may read, stars are nowhere near being black bodies and the effective temperature Teff usually quoted is a theoretical construct which has little to do with the actual surface temperature of the star, if such a thing could even be defined. This diagram in Wikipedia nicely shows the problem using Vega for example
https://commons.wikimedia.org/wiki/F...comparison.png
A stellar photosphere is not an ideal blackbody, due to the temperature gradient across the thickness of the photosphere, but it is close enough to be modelled as one.
Mathematical modelling is basically the science of approximations.

The spectrum of Vega taken with a professional CCD with high sensitivity in UV shows lines that extend well into the EUV range.
These lines are probably associated with Vega's corona.
Since stellar coronas are very thin they are not black bodies and cannot be modelled as such.
Since the Vega spectrum contains components that are not blackbody in nature, it's not surprising the overall spectrum doesn't resemble a blackbody.

The spectrum of Vega using a typical amateur CCD with zero response below 300nm on the other hand does resemble a blackbody.

Quote:
If using the camera response gives you roughly the right answer this is purely luck, due to the fact that the response of the grating if roughly similar to that of the camera. To be honest you might as well have sketched out an instrument response freehand and used that. Here are some examples of the typical efficiency curve for a transmission grating here.
http://www.optometrics.com//App_Them...s_t_graph1.gif
The Star Analyser will be similar. The response excluding the atmosphere and other optical components in the system will be very roughly this x the CCD response.
I'd be very interested to see the information on the Star Analyser 200 if available. Combining the responses of the KAF-3200ME chip and the Star Analyser 200 might be an interesting exercise, even if it doesn't address atmospheric extinction.

Quote:
I am not an RSpec user but you can see the steps I used to achieve the results shown using ISIS and Vspec. in the presentation "Low Resolution Slitless Spectroscopy (Star Analyser)- Observing a fast transient of a T Tauri star." Particularly slides 5-33 downloadable from this page
http://www.threehillsobservatory.co....roscopy_10.htm
The procedure will be similar in Rspec.
Slide 31 is a good illustration of the issue I described in a previous post.
In the instrument corrected spectrum there is now a wealth of detail below 4200 A which is not in the raw data.
This doesn't look right to me and resembles processing artefacts.

Quote:
Hints and tips: [points 1-10 snipped; see post #31 above]
You are a wealth of information Robin.
I'll have to look closely at BASS; RSpec seems to be deficient in some areas.

Regards

Steven

Last edited by sjastro; 30-03-2017 at 06:18 PM. Reason: spelling
#34 - 30-03-2017, 03:38 PM - Merlin66 (Ken)
Steven,
I think you'll find the 200 l/mm gratings are very similar - see this test report:
https://spectroscopy.wordpress.com/2...ssion-grating/

Hmmm, a Vega Planck curve - not really...
Yes, you have to use common sense and work away from the Balmer discontinuity. K. Robinson, in his "Starlight - An Introduction to Stellar Physics for Amateurs", gives a very good explanation of the issues and problems of fitting a Planck curve. (His other book, "Spectroscopy - The Key to the Stars", is also recommended.)

IMHO I'd crop the spectrum below 385nm for the IR (instrument response) curve. The safest way to start is with an A type spectrum, which is easy to analyse and process.
(BASS Project contains a vast collection of element lines to assist analysis, as well as the Pickles and MILES reference star databases.)
#35 - 30-03-2017, 11:59 PM - robin_astro
Hi Ken,
Quote:
Originally Posted by Merlin66 View Post
Hmmm, Vega Planck curve - not really....
Yes, you have to use common sense and work away from the Balmer discontinuity -
This still does not give the "right" answer though, even above the Balmer step. You routinely see claims that you can measure a star's temperature (Teff) from the shape of the spectrum continuum. The problem is that the effective temperature Teff of a star is not based on the spectrum continuum (or even the surface temperature) of the star at all. It is a theoretical value, defined as the temperature of a black body with the same size and total luminosity as the star. For Vega this works out at 9500K. The shape of the continuum redwards of the Balmer step, though, roughly matches a Planck curve with a temperature of 15000K. As well as H ionisation beyond the Balmer step, there are many other processes going on which affect the shape of the continuum. (Not forgetting the potential effects of interstellar extinction, which makes stars look cooler than they actually are.) All you can get from the continuum shape is a rough trend of temperature with spectral type.
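In symbols, the definition Robin gives is just the Stefan-Boltzmann relation applied to the whole star:

```latex
% Teff is defined from the star's total luminosity L and radius R,
% not from the shape of the continuum:
L = 4\pi R^2 \sigma T_{\mathrm{eff}}^4
\qquad\Longrightarrow\qquad
T_{\mathrm{eff}} = \left( \frac{L}{4\pi R^2 \sigma} \right)^{1/4}
```

Two stars with the same L and R therefore have the same Teff by definition, whatever their continuum slopes look like, which is why a Planck fit to the continuum (giving ~15000K for Vega) need not recover the quoted Teff of 9500K.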

Cheers
Robin
#36 - 31-03-2017, 12:42 AM - robin_astro
Hi Steven,
Quote:
Originally Posted by sjastro View Post
The spectrum of Vega using a typical amateur CCD with zero response below 300nm on the other hand does resemble a blackbody.
No it does not. It only resembles a black body (though not one matching Teff) above the Balmer jump at 3650A, but diverges significantly below this, as is clear in many amateur spectra, eg the first one here, which is the same exercise but with a slit spectrograph:
http://www.threehillsobservatory.co....ence_stars.pdf


Working below ~3800A with slitless systems like the Star Analyser though is difficult due to the very low signal levels relative to the background.
Quote:
Slide 31 is a good illustration of the issue I described in a previous post.
In the instrument corrected spectrum there is now a wealth of detail below 4200 A which is not in the raw data.
This doesn't look right to me and resembles processing artefacts.
No, this is all good data (the Balmer series, which eventually blend together at this low resolution). If you look at the next slide, 32, which compares the measured and library versions (the acid test for a good instrument response which I mentioned in my previous post), you can see good correspondence all the way down to 3800A. The Balmer lines in this region are not obvious in the raw spectrum because they lie on a steep downward slope of the instrument response, so they just appear as inflections.

RSpec is mainly for beginners and is not the best software for precision work (I use ISIS, sometimes combined with Visual Spec). The technique is tried and tested though, and with the pointers in my post it can produce a good instrument response with your equipment down to 3800A.

Cheers
Robin
#37 - 31-03-2017, 12:52 AM - robin_astro
BTW - off topic, but we have been treated to some views of your wonderful southern skies here in the UK over the past couple of days, thanks to the annual BBC "Stargazing Live" series of programmes, which this year is coming from Siding Spring.
http://www.bbc.co.uk/programmes/b019h4g8
Reply With Quote
  #38  
Old 31-03-2017, 10:48 AM
Merlin66's Avatar
Merlin66 (Ken)
Registered User

Merlin66 is offline
 
Join Date: Oct 2005
Location: Junortoun Vic
Posts: 8,906
I don't strongly disagree with the accuracy (or otherwise) of the generated Planck curves from amateur spectral images.
It can be very difficult to achieve a "right" answer, but the exercise is still of interest and some value when starting out - "Spectroscopy 101".
The attached curves show the continuum differences we're talking about.
Attached Thumbnails
Click for full-size image (starlight_fig9.jpg)
82.9 KB, 18 views
Reply With Quote
  #39  
Old 31-03-2017, 07:06 PM
sjastro's Avatar
sjastro
Registered User

sjastro is offline
 
Join Date: Jun 2007
Posts: 2,926
Quote:
Originally Posted by Merlin66 View Post
Steven,
I think you'll find the 200 l/mm gratings are very similar - see this test report:
https://spectroscopy.wordpress.com/2...ssion-grating/
Thanks Ken,

Creating an instrument response curve using only the CCD and grating responses looks promising.
Since the photon count is reduced by both the CCD and the grating, the combined effect is multiplicative: the overall response is obtained by multiplying the separate CCD and grating response curves in RSpec.
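As a rough sketch of this multiplicative correction (the sample points below are made-up placeholders, not the actual published CCD QE or grating efficiency data):

```python
import numpy as np

# Hypothetical sampled response curves (wavelength in nm, relative efficiency)
wl_ccd = np.array([400, 500, 600, 700, 800])
qe_ccd = np.array([0.35, 0.60, 0.55, 0.40, 0.20])

wl_grating = np.array([400, 500, 600, 700, 800])
eff_grating = np.array([0.50, 0.65, 0.70, 0.60, 0.45])

# Common wavelength grid for the raw spectrum
wl = np.linspace(400, 800, 9)
raw_counts = np.ones_like(wl)  # placeholder raw spectrum

# Both losses act on the photon count, so the combined response is
# the product of the two curves interpolated onto the common grid
response = np.interp(wl, wl_ccd, qe_ccd) * np.interp(wl, wl_grating, eff_grating)

# Instrument-corrected spectrum: divide the raw counts by the response
corrected = raw_counts / response
```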

The first attachment compares the raw and CCD/grating-corrected spectra for Sirius.
Note there is no sawtooth effect in the H-delta to H-gamma region of the corrected spectrum, only a relatively slight change in the FWHM.
If the grating data included information at 300nm, I strongly suspect the spectrum would have been fine at 300nm without the need to truncate the data.

The second and third attachments compare the combined CCD/grating response with an A0V library star, and the CCD-only response with an A0V star, respectively.
There was no A1V library star for direct comparison, and there is no correction for extinction.
Despite this, the CCD/grating response provides a well-corrected spectrum; the CCD-only response does not.

I will experiment with stars of other spectral types to see if the results are reproducible.

Steven
Attached Thumbnails
Click for full-size image (Sirius Correction vs No Correction.jpg)
63.2 KB, 9 views
Click for full-size image (Sirius Corrected by CCD and Grating Response.jpg)
76.9 KB, 11 views
Click for full-size image (Sirius Corrected by CCD  Response Only.jpg)
66.8 KB, 9 views
Reply With Quote
  #40  
Old 01-04-2017, 02:23 AM
robin_astro
Registered User

robin_astro is offline
 
Join Date: Jun 2009
Location: UK
Posts: 73
Hi Ken,
Quote:
Originally Posted by Merlin66 View Post
I don't strongly disagree with the accuracy (or otherwise) of the generated Planck curves from amateur spectral images.
It can be very difficult to achieve a "right" answer, but the exercise is still of interest and some value when starting out - "Spectroscopy 101".
The attached curves show the continuum differences we're talking about.
It's not that it is difficult to do; it is just that fitting a Planck curve does not give the expected answer, for fundamental reasons. This then confuses the beginner. The posted diagram is a case in point: it is often seen but is fundamentally incorrect (or at best misleading). The spectrum labelled 8000K, for example, is clearly an A star (~A5), which would have a Teff of 8000K. If you try to fit a Planck curve to it, though, you find 8000K is nowhere near the best fit, which is at a much higher temperature of ~11500K. See attached.

This is why I just show beginners the qualitative difference between hot and cool star continuum shapes, and then swiftly move on to the features which really show the physical properties of the star, including temperature.
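The fitting procedure itself is simple to sketch (synthetic data only, not the real A5 spectrum): normalise the curves at a reference wavelength and grid-search the temperature by least squares. Here the "observed" continuum is itself a Planck curve at 11500 K, so the search recovers that value; run against a real A-star continuum, the same procedure lands far above the star's Teff, which is the mismatch described above.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(wl_m, T):
    """Blackbody spectral radiance B(lambda, T), wavelength in metres."""
    return (2 * h * c**2 / wl_m**5) / np.expm1(h * c / (wl_m * k * T))

wl = np.linspace(380e-9, 750e-9, 200)  # visible band

# Stand-in "observed" continuum: a Planck curve at 11500 K,
# normalised at 500 nm (a real A-star continuum is NOT a pure blackbody)
obs = planck(wl, 11500.0)
obs /= np.interp(500e-9, wl, obs)

def resid(T):
    """Sum of squared residuals between normalised model and observation."""
    model = planck(wl, T)
    model /= np.interp(500e-9, wl, model)
    return np.sum((model - obs) ** 2)

# Grid-search the best-fitting temperature
temps = np.arange(5000.0, 20000.0, 100.0)
best_T = min(temps, key=resid)
```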

Robin
Attached Thumbnails
Click for full-size image (A5v_Planck_fit.png)
11.7 KB, 8 views
Reply With Quote