The future of Amateur Astronomy: what will be possible?


skysurfer
21-02-2014, 01:06 AM
Astrophotography has made great strides over the last 50 years.
In the 80s and 90s I was still peering through a guiding eyepiece for 10 minutes while tracking an exposure on Fujicolor 400; now I take thirty 10-second exposures and stack them, using the same Televue Genesis.

Many amateurs have the same experience.

Sensor technology has improved enormously. ISO 6400 is now commonplace with full-frame cameras.

What will it be like in the 2020s or '30s?
ISO 100,000, or even a million?

I think the SLR will be extinct by then, completely replaced by mirrorless cameras whose live view instantly shows the sky as if it had been exposed for 5 minutes at ISO 1600 by today's standards.

This would allow active observing goggles with a built-in live-view camera, showing the sky live but amplified, at 30 fps? In urban skies, equipped with improved Hutech filters?

The same equipment making daylight viewing of planets easier?

And lightweight travel scopes, such as a 10 cm apo weighing only 1 kg (or 20 cm truss apos of 8 kg, or 50 cm compact Dobsonians with tracking platforms at only 15 kg)?

tonybarry
21-02-2014, 07:03 AM
There is a fairly hard physical limit to cameras - the quantum efficiency can't exceed 1.0, so you will still need glass (mirrors/lenses) to collect light, and the fainter the object, the more glass you will need. Current DSLRs are around 5-10% QE (but getting better all the time), and the specialised astro cams approach 70%. The return on investment in going from 70% to 90%+ is not going to be eye-popping.
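
A quick sanity check of that diminishing-returns point - a minimal Python sketch, assuming purely shot-noise-limited imaging; the photon count is illustrative:

import math

def snr(qe, photons):
    # Shot-noise-limited SNR: signal = QE * photons, noise = sqrt(signal),
    # so SNR = sqrt(QE * photons)
    return math.sqrt(qe * photons)

photons = 10_000  # photons reaching one pixel during an exposure (illustrative)
for qe in (0.10, 0.70, 0.90):
    print(f"QE {qe:.0%}: SNR = {snr(qe, photons):.0f}")

# SNR scales with sqrt(QE): going from 10% to 70% improves SNR ~2.6x,
# but 70% to 90% gains only ~13%.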

Noise figures are dropping all the time; the read noise of the KAF-8300 is around 8 e-, and dark current falls rapidly with cooling (roughly halving for every ~6 °C drop in sensor temperature). Christian Buil has an excellent website with camera QE and noise figures listed.

http://www.astrosurf.com/buil/isis/noise/result.htm
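
To make the noise budget concrete, a minimal sketch along the same lines - the 0.2 e-/s baseline and the doubles-every-~6-degrees rule of thumb are generic illustrations, not measured KAF-8300 values:

import math

def dark_current(temp_c, d0=0.2, t0=25.0, doubling_c=6.0):
    # Rule of thumb: dark current roughly doubles for every ~6 deg C rise
    return d0 * 2 ** ((temp_c - t0) / doubling_c)

def total_noise(signal_e, dark_rate, exposure_s, read_noise_e=8.0):
    # Per-pixel noise in electrons: shot, dark and read noise add in quadrature
    return math.sqrt(signal_e + dark_rate * exposure_s + read_noise_e ** 2)

for t in (25, 0, -20):
    d = dark_current(t)
    print(f"{t:+3d} C: dark {d:.4f} e-/s, "
          f"total noise over 600 s at 500 e- signal: {total_noise(500, d, 600):.1f} e-")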

The mount and the glass will still figure prominently in the years to come. What will improve is the cost of the good silicon needed to get images.

Regards,
Tony Barry

pdalek
23-02-2014, 12:34 AM
I think light pollution will kill amateur astronomy (using your own scope) within 20-30 years.
Time-share on dark-site instruments is an attractive alternative.
I would really like a time-share on a retired 2 m scope with a big sensor - aperture plus plate scale beats any new camera's performance gains.
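
The aperture-plus-plate-scale point is easy to put numbers on with the standard plate-scale formula; the 2 m f/8 and 10 cm f/7 configurations below are hypothetical examples:

def plate_scale(pixel_um, focal_mm):
    # Arcseconds per pixel: 206.265 * pixel size [um] / focal length [mm]
    return 206.265 * pixel_um / focal_mm

# Hypothetical retired 2 m f/8 scope vs a 10 cm f/7 refractor, 9 um pixels
for name, focal_mm in (("2 m f/8", 16000), ("10 cm f/7", 700)):
    print(f"{name}: {plate_scale(9, focal_mm):.2f} arcsec/pixel")

# And the 2 m aperture collects (2000/100)**2 = 400x the light per unit time.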

Peter Ward
23-02-2014, 10:55 AM
While I have a lot of respect for Christian's work, I take issue with his derivation of the read noise of the STX-16803... which is nearly double both the manufacturer's spec and figures derived elsewhere.

In short I don't think it's correct.

alpal
23-02-2014, 12:22 PM
The professionals are getting too far ahead of us.
How about a 39-metre mirror?

http://www.youtube.com/watch?v=3wOFAkggSiU

gregbradley
23-02-2014, 12:29 PM
Neither is the 5-10% QE figure for DSLRs - they are way higher than that.

Roger Clark has a table covering many DSLRs. They typically range from 22% up to 59% at the high end, with many between 22 and 35%. The Sony Exmor 36.4 MP is closer to 59%, and having used many sensors I would say that is roughly correct.

It seems the bulk of research and advancement is in CMOS sensors for cameras.

A Bayerless, Foveon-style 54 MP full-frame sensor from Sony is likely to arrive within a year. Fuji is working on releasing an organic sensor with more dynamic range than any existing sensor. It's all happening in this arena, whilst in CCDs - yawn - nothing much is happening.

Greg.

PRejto
23-02-2014, 02:01 PM
I cannot add anything to the current discussion, except to note that almost every prediction of "the future" has proven to be incorrect. One need only read past articles in Popular Science, Popular Mechanics, etc. to see how wrong we get it, no matter how hard we try. The really big advances seem to come unexpectedly from left field and change everything.

Peter

Shiraz
23-02-2014, 04:33 PM
As others have pointed out, there is not much more that is physically possible with detector sensitivity - don't expect more than fractional advances; it can't happen.

As I see it, the final game-changing advance in detector technology will be the development of affordable, really-low-read-noise chips. These will allow:

- lucky imaging of brighter DSOs. This will give around an order-of-magnitude improvement in resolution with bigger scopes, as it will cut through atmospheric seeing: the resolution enjoyed by planetary imagers will become available for brighter DSOs. The requirement here will be fairly small pixels, or Barlow lenses, to take advantage of the fleeting bursts of high-quality seeing (a rough sampling calculation follows at the end of this post). The big downside will be that this unmasks previously hidden faults in telescopes - systems designed to image well enough at 1-2 arcsec resolution (seeing limited) will not necessarily work so effectively at the diffraction limit (e.g. 0.3 arcsec).

- high-quality imaging with lower-precision mounts. There will no longer be a need for 10-minute subs in broadband imaging - large numbers of subs a few seconds long (or even under 1 second for brighter objects) should be effective with read noise in the vicinity of 1 electron (a rough comparison is sketched below). The whole paraphernalia of hugely expensive precision mounts, OAGs/ONAGs (or any guiding system, for that matter), adaptive optics etc. will be largely unnecessary, since all you will need is a mount that can keep an image stable for very short periods - the stacking software can compensate for longer-term drifts such as periodic error, or image rotation on an alt-az mount.
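
Here is that comparison as a minimal sketch - the object, sky and read-noise rates are illustrative, not measurements:

import math

def stacked_snr(obj_rate, sky_rate, sub_s, n_subs, read_noise_e):
    # SNR of a faint object after stacking n_subs subs of sub_s seconds:
    # per-sub variance is (object + sky) shot noise plus read noise squared
    signal = obj_rate * sub_s * n_subs
    var = (obj_rate + sky_rate) * sub_s * n_subs + n_subs * read_noise_e ** 2
    return signal / math.sqrt(var)

# 600 s total integration each time; object 1 e-/s, sky 10 e-/s per pixel
print(f"1 x 600 s at 8 e- read noise: SNR {stacked_snr(1, 10, 600, 1, 8):.2f}")
print(f"600 x 1 s at 8 e- read noise: SNR {stacked_snr(1, 10, 1, 600, 8):.2f}")
print(f"600 x 1 s at 1 e- read noise: SNR {stacked_snr(1, 10, 1, 600, 1):.2f}")

# With ~1 e- read noise the 600 short subs land within a few percent of the
# single long exposure; with 8 e- they fall far behind.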

I don't think this is a pipe dream - there are at least three different low-read-noise technologies out there, but none is currently anything like affordable for a hobbyist. There will be hidden gotchas, but even so, low-read-noise chips could be revolutionary.
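
And some rough numbers on the lucky-imaging sampling requirement mentioned above - the 35 cm aperture and 5.4 um pixels are hypothetical:

import math

def diffraction_limit_arcsec(aperture_m, wavelength_nm=550):
    # Rayleigh criterion: theta = 1.22 * lambda / D, converted to arcseconds
    return math.degrees(1.22 * wavelength_nm * 1e-9 / aperture_m) * 3600

def nyquist_focal_length_mm(aperture_m, pixel_um):
    # Focal length at which two pixels span the diffraction limit
    scale = diffraction_limit_arcsec(aperture_m) / 2  # arcsec per pixel
    return 206.265 * pixel_um / scale

d = 0.35  # hypothetical 35 cm scope
print(f"Diffraction limit: {diffraction_limit_arcsec(d):.2f} arcsec")
print(f"Nyquist focal length, 5.4 um pixels: {nyquist_focal_length_mm(d, 5.4):.0f} mm")

# ~5.6 m of focal length - well beyond the ~3.9 m native focal length of a
# typical 35 cm SCT, hence the need for small pixels or a Barlow.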

Peter Ward
23-02-2014, 08:03 PM
Actually, orthogonal-transfer and Skipper CCDs have performance that makes CMOS look very lame for astronomical imaging applications (the latter has *sub-electron* read noise :eyepop: ).

Thinned CCDs now have QEs well over 90% from 450 to 700 nm.

The problem is that these are specialised, cutting-edge technologies that are unlikely to make it into any Canon or Nikon camera anytime soon.

And then there is the cost.....:shrug:

Shiraz
23-02-2014, 09:39 PM
Already here Peter.

Sony's Exmor R is back-illuminated (thinned) CMOS - I think it's already in all sorts of devices at consumer-level pricing. http://www.sony.net/Products/SC-HP/cx_news/vol59/pdf/featuring_Exmorr.pdf

Peter Ward
23-02-2014, 09:57 PM
Really? I was under the impression these were back illuminated only.

Thinning makes sensors very fragile (and the process destroys a few of them)... I was not aware Sony did this, or would even want to.

Shiraz
23-02-2014, 11:09 PM
Looks like they thin them down to 8 microns - going by what they say in the article I linked to:

"This meant that it would not be possible to form the on-chip lenses and color filters over 1 μm pixels. To resolve this issue, Sony developed a new wafer thinning technology to assure that such distortion and warping does not occur."

Peter Ward
23-02-2014, 11:58 PM
Having had a bit of a read elsewhere, I suspect something was lost in translation at Sony... while thinning and back illumination have been synonymous, they are not the same thing.

Exmor R sensors have their photodiodes below the lenslet array but above the wiring architecture... very clever, by the way... but as far as I can tell they are not "thinned" in the conventional sense. What Sony have done is reverse the stack... but call it a "thinned" sensor, as above.

The CMOS stack reversal is consistent with their press release here:

“Sony has succeeded in establishing a structure that layers the pixel section containing formations of back-illuminated structure pixels over the chip affixed with mounted circuits for signal processing, which is in place of supporting substrates used for conventional back-illuminated CMOS image sensors,” (I note there is no mention of ablation of the silicon substrate)

It could also be a case of them not telling anyone exactly what they do, due to commercial sensitivities :)

Shiraz
24-02-2014, 12:05 AM
Seems pretty clear to me - from the article originally linked to:

"While the silicon substrate itself is about 600 to 800 μm thick, in the Sony 'Exmor R' CMOS image sensors it was necessary to make the silicon substrate, including the metal wiring layer, have a thickness of about 8 μm to allow light to be received through the back of the substrate. ... This meant that it would not be possible to form the on-chip lenses and color filters over 1 μm pixels. To resolve this issue, Sony developed a new wafer thinning technology to assure that such distortion and warping does not occur."

In any case, the devices are back-illuminated, which is what really matters - that's where the performance gain comes from. If Sony managed to do this without thinning then good luck to them, but they certainly claim to thin the chips to a very fine structure, and to have developed a superior thinning technology to boot.

We live in interesting times :)

Shiraz
24-02-2014, 09:18 AM
Getting back to the original post: the white paper linked below outlines a method for significantly increasing the sensitivity of colour sensors by replacing the RGB Bayer array. It discusses why earlier attempts failed, and the implication is that the Aptina LRB technology can use advanced signal processing to provide a significant sensitivity gain without the dynamic-range and colour penalties of those attempts.

If it works as suggested and finds its way into larger-pixel devices, this type of technology could add one stop of sensitivity to DSLRs and one-shot colour cameras for astro use. Couple that with back illumination and there might be about two stops in total to go before reaching the hard physical limit on the sensitivity of colour cameras (a rough tally below).
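
Counting those stops explicitly - the 25% starting point for a current front-illuminated colour sensor is an assumption for illustration, not a measured figure:

import math

current = 0.25                     # assumed effective sensitivity today
after_clear_pixel = current * 2    # ~1 stop from replacing the Bayer array
after_bsi = after_clear_pixel * 2  # ~1 more stop from back illumination

print(f"Stops to the physical ceiling of 1.0: {math.log2(1 / current):.1f}")
print(f"After both gains: {after_bsi:.2f} - essentially at the limit")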

http://www.aptina.com/Aptina_ClarityPlus_WhitePaper.pdf

If you are interested in how much CMOS sensor technology has advanced recently and where it might be going, the Aptina site has some good reads.

avandonk
24-02-2014, 10:38 AM
Nonsense! In the future our quantum computers will merely sample the information on the surface of our Universe to reconstruct any object.

We are all just a projection from the two dimensional information surface that encloses our Universe.

If the latest string theory is correct, our whole view of the Universe will change.

See this talk

http://www.youtube.com/watch?v=Op8ZXIzlAwY

It is esoteric and difficult, but it is one possible future!

Do a search on 'holographic universe'.

Bert

Peter Ward
24-02-2014, 12:44 PM
Indeed... I read it as them having a thin substrate rather than having actually "thinned" it... nevertheless a moot point at best :lol:... Now if they could just get over the race to teeny pixels and go for uber pixels, like Canon:

http://www.imaging-resource.com/news/2013/09/13/canons-new-high-sensitivity-full-frame-sensor-captures-tiny-details-of-fire

pdalek
27-02-2014, 02:23 PM
The new organic sensors
http://www.fujifilm.com/news/n130611.html
could be released in cameras next year.

A paper on how they work
http://www.imagesensors.org/Past%20Workshops/2013%20Workshop/2013%20Papers/09-3_012-Ihama_paper.pdf

Many companies are developing similar technology
http://www.nikkoia.com/news-fr/idtechex-coup-de-projecteur-sur-nikkoia/

Shiraz
27-02-2014, 04:07 PM
Very interesting Patrick, thanks for posting. The extra dynamic range could be very useful for astro purposes.