
Chat GPT - Open AI, the future of Astronomy


glend
01-02-2023, 01:11 AM
Over the past few months, since the release of OpenAI's ChatGPT, I have watched countless examples of Python code generating artwork, composing music, writing stories, and making the big leap into providing answers, outputs, and findings that are significant jumps over what now seems like the rudimentary software of now-obsolete applications, Google et al. included.

The question I want to pose to the community here is:

Where is amateur astronomy going, in a new world order where AI is capable of finding anything in the sky, capturing it, rendering it, and displaying it, almost instantly compared to techniques which we might be using today?

:question:

raymo
01-02-2023, 02:05 AM
Over the next few decades humans will be needed less and less in virtually every field of endeavour that we are currently involved in. This transition has of course been under way for quite some time.
A good illustration of this is the quality of the images now being posted by newbies in the field of AP. The massive proliferation of software, and to a lesser extent hardware, is allowing far easier focusing, polar alignment, processing, etc. This means that if you have the budget and are computer literate, you can eliminate a large percentage of the learning curve. Shortly, processing will be carried out by the software if the imager so chooses, leaving the human contribution to AP as little more than the setting up and taking down of his/her rig. I find it very sad that the satisfaction gained from the hard work involved in producing an image, and hopefully later on producing improved versions, will be lost.
We of course already have electronic scopes that produce passable images while we wait, and they will only improve.
I think that amateur astronomy, at least as we know it, will cease to exist
in a decade or two.
raymo

Startrek
01-02-2023, 07:24 AM
Glen ,

Thanks for raising this extremely important topic

In reference to BlurXTerminator (AI based)...

At least the developer of StarTools, my post-processing software, has maintained his stance on keeping astrophotography post-processing "real", not "fake".

Below is a link I posted a few weeks back; it's long but worth a read.


https://forum.startools.org/viewtopic.php?t=2746

PS: Raymo, you've probably seen more changes over the past 60+ years in astronomy than anyone on this forum, and I guess it may sadden you to some extent. But I'm sure this "fake" stuff won't stand the test of time, and hopefully in a couple of decades we will still be enjoying the night sky like we do now.

Cheers
Martin

N1
01-02-2023, 08:11 AM
Many moons ago, I did some courses in the field of humanities. "What does one do with that?" was a question I often heard, hinting at employment prospects (or lack thereof). My answer was, "it's the last thing to be automated, where humans are rendered obsolete". I would say the same thing today.

I wonder how good AI will ever be at thinking outside the square/ lateral thinking, a process which leads to a lot of scientific discovery and advancement.

Recreational AP will still have its place no matter how good AI based processing becomes, just like visual astronomy.

dikman
01-02-2023, 08:42 AM
Being able to reach conclusions based on minimal information and using illogical reasoning has always been one of the strongpoints of the human mind.:)

AdamJL
01-02-2023, 09:26 AM
^ I think that’s called “jumping to conclusions” in ordinary lexicon and I agree with you completely.

Me, I’m super excited for an AI future. Loving what the AI based tools are doing with AP. Long may it continue!

glend
01-02-2023, 09:41 AM
Illogical reasoning is also one of the biggest problems with human minds, based on historical outcomes. And logical to whom? Putin, Hitler, etc.

The AI military systems probably have no regard for certain humanity-based rules. Now is a good time to think about Asimov's First Law of Robotics.

I think we can all agree that some sort of ethical framework is required for Open AI, however, the world of Dark AI will continue to evolve.

But can we try to avoid straying from the original question re astro impacts? I know it's hard.

sharkbite
01-02-2023, 09:45 AM
Yup...

Back in the '90s I was warned not to take my chosen career path in IT, due to my job being fully automated within the next 5 years... I'm still waiting...

Yes some parts were automated.....the mundane, boring bits.

This got rid of all the lunkheads, and elevated positions like mine to people who could think.

I see AI as an extension of this - getting the machines to do the stuff we are either not good at, or hate
....so that we can get on with more interesting things...

For mine - I got into astronomy to see pretty things and photograph them, not spend all night setting up kit...

Sorry if this sounds a bit blunt - i realise my experience is not the same as others...so mileage may vary!

glend
01-02-2023, 09:55 AM
Here is a link to the OpenAI guidelines, and within it a link to the OpenAI community forum.

https://community.openai.com/guidelines

The_bluester
01-02-2023, 11:36 AM
I think AI in astrophotography processing is likely to be, for some time, the equivalent of ChatGPT.

By that I mean it has already been shown that ChatGPT is quite capable of producing an essay to argue a point that reads well, hangs together grammatically and sounds authoritative, but in terms of the subject matter is completely wrong. I was listening to a discussion about it on the radio, about education settings possibly having to go back to oral exams when there are doubts: if a student can feed a few text prompts to an AI program and get back a well-structured essay that is also contextually right (and surely eventually they will), then how do you work out whether the student actually understands the subject matter except by giving them a grilling face to face?

The AP equivalent is some of the AI tools which simply invent features where none exist; they look great, look like details, but they come from an unknowable association inside "black box" software. Make sure it has the right training set or the Vela SNR suddenly looks like the "Wall of Noses" in the Jenolan Caves.
The other side of the coin is that for AP it might well eventually be able to take whatever rubbish data you throw at it, identify the object and massage your data until the "output" looks like the reference image.

Neither option appeals to me in the least. Star removal by AI like StarXterminator is one thing, and I am fairly certain it is inventing detail as it "removes" the stars (if a star has saturated the pixels in an area, for instance, no amount of extrapolation is anything but a guess at what would be there if the star were not). But my use of that type of tool is generally to remove stars from narrowband images before using them as "lighten" layers to add the NB to LRGB images, and the stars in the LRGB are going to push through and obscure the invented data anyway. I also like NoiseXterminator, but that has to have limits too, before it starts interpolating away detail that is actually there, having mis-identified it as noise.

TL;DR:

I reckon beyond a certain point (which we are pretty close to, IMO) AI tools in AP will not improve the breed. It is not inconceivable to have an AI set up that can identify objects by plate solving and corral whatever rubbish you throw at it into a good image. But is getting out your pencil, drawing a star field on paper for it to solve, and then letting it massage your coffee stains into an image of M42 really astrophotography?

glend
01-02-2023, 12:46 PM
Paul, your points assume an already existing body of knowledge on your part, and all of the equipment, software, etc. necessary to acquire and process an image. Many others, especially those thinking about getting into astro imaging, or visual observation, will have a learning curve and an investment curve, not to mention the time required.

For a person just developing an interest, astro AI offers an incredible shortcut in their knowledge building. Face it, just about anything you can image or observe has already been imaged and stored on the Net somewhere. The tools are already available to view these objects at various focal lengths, with various filter packs, etc., in high definition.

For those of us that have gone through the astro learning and investment curve, I suspect many are as in love with the process as with the outcome. It is a ritual that we find satisfying. And we spend a lot of time talking here about process.

For new generations, the learning curve will flatten significantly, allowing less obsessive involvement in curating a process and more time for other things. They will likely not need to buy equipment, worry about the weather, etc., as the AI can give them what they seek.

Sure, there is no single astro AI solution just a click away today, but people are out there working on it. ;)

ReidG
01-02-2023, 01:03 PM
In the world of electronic hardware everything once had to be done with discrete components, and large computers were actually made that way.
Progressively there have been increases in the size and complexity of integrated circuits.
At each stage there have been cries that this will make humans obsolete. What has actually happened is that increasingly complex problems can be solved.
I expect that AI will continue to evolve and improve and become part of our lives. Perhaps Google may have to do a huge and rapid change of direction or perhaps they may simply fade away as many big companies have done in the past.


The fundamental problem facing AI remains doing what humans and other animals can do: analysing complex situations they have not been trained to work with and still coming to useful conclusions.
ChatGPT, if asked for an omelette recipe, would probably supply quite a good one, but it still has no idea what an omelette is. A school kid would.

julianh72
01-02-2023, 01:04 PM
I just posed this request to ChatGPT:

Demonstrate "lateral thinking" to develop an innovative means of capturing astrophotography

This was the response:

One approach to demonstrating lateral thinking in astrophotography might involve rethinking the tools and techniques used for capturing images. Instead of relying on traditional telescopes and cameras, for example, one might experiment with new technologies such as:

Drones: Equipping drones with cameras and flying them at high altitudes could provide a unique perspective for capturing astrophotography images.
Balloon-based cameras: Launching cameras attached to balloons into the stratosphere could also offer new angles and perspectives for capturing images of celestial objects.
Augmented reality: Developing an app that combines data from telescopes and cameras with augmented reality technology could create a new way of experiencing and capturing astrophotography images.
Multi-spectral imaging: Using sensors that can capture images across a range of wavelengths, beyond the visible light that traditional cameras pick up, could reveal previously hidden details in astronomical objects.

Nothing truly earth-shattering there - but some food for thought for hackers and makers?

The_bluester
01-02-2023, 01:44 PM
I think, Glen, the biggest problem I have is the conceptual one: what is the point? As you say, we are all pretty invested here in equipment, software and knowledge acquisition, to say nothing of aesthetic tastes, where everyone is different and everyone's preferences evolve over time as they learn more.

As a hands on sort of person, I can still see how downloading data to process from Hubble for instance (Or any telescope for hire service around the world) to avoid the expense of the equipment rabbit hole could be very appealing to a lot of people.

Likewise more and more automated "one click" processing software that takes whatever you throw at it and massages it into shape by fair means or foul (mathematical deconvolution versus AI hallucination techniques, as is the current debate).

Where it all falls down for me (the concept issue) is using someone else's data and chucking it into inscrutable black-box processing software. What is the point? You might as well just go to Astrobin, find an image you like and download that. That is from my "hands on" perspective. I just can't see why someone who does not acquire the data for themselves, and does not do much processing-wise other than point the black box at the right folder, is going to bother. They might as well use one of the text-to-image AIs and write "Please make a realistic image of a galaxy as seen from Earth as though through a telescope".

I think the following two posts show the current limitations quite well, in that AI is currently only as good as its training data. There is no analysis, no actual thinking, just joining the dots between loads and loads of training examples. What does it do if someone comes up with a truly original question, or if there is no training data for it? Say "Dunno, no one has ever asked that before"?

dikman
01-02-2023, 01:46 PM
Re drones and balloons: fine for short exposures, but for long(ish) exposures stability and a fixed location would seem to be a necessity. Trying to maintain constant, long, accurate pinpoint tracking with a drone could be an interesting exercise. Not to mention a balloon, which is dependent on the vagaries of any wind (or atmospheric disturbances).

The_bluester
01-02-2023, 02:28 PM
Sort of my meaning exactly about the limitations. At any sort of level you would look at those ideas and decide that they were going to be either impracticable (drones and balloons) or actually so sensible that they are already done (IR photography, anyone?), but it is the kind of "Hey, wouldn't it be cool if we could" thought bubble that would get pulled out of the mass of training data acquired by more or less backing up the internet.

The third (augmented reality) is sort of cool, but shows that it just associates rather than understands. If you understood the question (demonstrate "lateral thinking" to develop an innovative means of capturing astrophotography), you would not answer with using AR to display it.

julianh72
01-02-2023, 02:44 PM
I had exactly the same thoughts when I first read the ChatGPT output - but then it occurred to me: NASA has already solved those issues with the SOFIA instrument (infrared observatory mounted in a Boeing 747 - now retired).

Sure - the average hacker / maker doesn't have access to the sort of resources NASA has, but I don't think it would be insurmountable to give it a try. Put a relatively short focal length / wide-angle camera onto a gyroscopically stabilised mount (available on commercial hobby drones, for example), and shoot video with relatively short exposures (< 1 second, say) rather than long-exposure imagery. I think it would be possible to use stacking software to put together some reasonable images - e.g. night sky / landscape images, but taken from a different viewpoint to the more common ground-based pictures of the Milky Way above the horizon / tree-line.
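To make the stacking step concrete, here is a rough sketch of the align-and-average idea, assuming the short exposures are already loaded as same-sized grayscale float arrays (the frame list and parameters are illustrative, and real stacking software does far more, e.g. rotation, distortion correction and outlier rejection):

```python
# Rough sketch of aligning and averaging many short exposures.
# Assumes `frames` is a list of same-sized grayscale numpy arrays.
# Alignment is a simple translation estimate via phase correlation; real
# stacking software also handles rotation, distortion and frame rejection.
import numpy as np
import cv2

def stack_frames(frames):
    ref = frames[0].astype(np.float64)
    acc = ref.copy()
    for frame in frames[1:]:
        f = frame.astype(np.float64)
        # Estimate the (dx, dy) shift of this frame relative to the reference.
        (dx, dy), _ = cv2.phaseCorrelate(ref, f)
        # Translate the frame back into registration with the reference.
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])
        aligned = cv2.warpAffine(f, m, (f.shape[1], f.shape[0]))
        acc += aligned
    return (acc / len(frames)).astype(np.float32)  # mean-stacked result
```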

As a thought-experiment into using AI for "lateral thinking", I thought it might prompt some new ideas for people looking for a new challenge - which it did, for me. Perhaps unsurprisingly - it turns out that it's not a totally original concept - a quick Google search for "drone-based astrophotography" shows people are already doing this:

https://www.youtube.com/watch?v=gvN5lnklDNs
https://fstoppers.com/aerial/drone-versus-dslr-astrophotography-can-mavic-2-take-down-nikon-352660

(Note that in the second link, the subject used a drone to get his camera above the fog bank, so he was able to capture some nice images on a night which would have kept most amateur astrophotographers inside.)

oska
01-02-2023, 10:55 PM
This "AI" is not more than fancy pattern matching https://www.youtube.com/watch?v=py5byOOHZM8
ie: these phrases are associated with these other phrases. The filtered phrases are passed on to another pattern matcher that makes reasonable sentences. Done.

From what I gather, without paying too much attention to it, or indeed at the risk of being entirely mistaken, BlurX uses a CNN (or something like it) to match a kernel (PSF) that approximates what the image was possibly blurred/convolved with by the seeing, optics etc., and then deconvolves the image with that kernel. If that's true then there's no voodoo, obviously.
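If that description is roughly right, the underlying operation is classical deconvolution: estimate a blur kernel and undo it. A minimal sketch, using a hand-built Gaussian PSF as a stand-in for whatever kernel a trained network might infer (the function names and parameters here are illustrative, not anything from BlurXTerminator itself):

```python
# Classical deconvolution sketch: build an assumed Gaussian PSF and apply
# Richardson-Lucy deconvolution from scikit-image. A tool like BlurX would
# presumably infer the kernel (and do far more); this only shows the principle.
import numpy as np
from skimage import restoration

def gaussian_psf(size=15, sigma=2.0):
    """Normalised 2-D Gaussian kernel used as a stand-in PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def deconvolve(image, sigma=2.0, iterations=30):
    """image: 2-D float array scaled to [0, 1]."""
    psf = gaussian_psf(sigma=sigma)
    return restoration.richardson_lucy(image, psf, iterations)
```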

glend
02-02-2023, 01:08 AM
Is astro AI not already in service? Maybe not the OpenAI variety, but say the software that scans the skies looking at pixel movement to identify small near-Earth objects that might represent a collision risk?

N1
02-02-2023, 07:24 AM
Well duh

AstroViking
02-02-2023, 07:59 AM
I've played with ChatGPT and am very unimpressed with it. The answers it gave were shallow and lacking any indication of understanding the concepts behind the question. They were also repetitious, with information regurgitated in different ways.

With regards to AI and astrophotography, I'd much rather struggle and battle my way manually through the process and achieve the satisfaction of creating a good image.

Cheers,
V

AdamJL
02-02-2023, 11:33 AM
I'm 3 years into this hobby now (but have been keeping an eye on it for about 12 years), and I hated that learning curve. I still do.
So much of this hobby is rooted in antiquated process and technology.

So I've been jumping at the chance to try the New Things (tm) that come out. Whether that's software (NoiseX, StarNet, StarX, BlurX, StarTools), hardware (ASI AIR, AM5) or scopes (more Petzvals please!), it's all good. The hobby has been slow to change, but these past few years, as younger people have entered the market, we've seen a massive explosion in new ways of capturing images.

I guess it also depends on where you're coming from... I'm not a data scientist. I don't actually care if my image suddenly has 5 pixels extra of hydrogen filament created by software. The people I share these images with outside of the astro community don't care either. I'll leave the pixel peeping for NASA and astronomers and scientists who need that sort of accuracy.
But if you're the sort of person who wants data integrity as the first port of call, that's great. Sadly these AI tools are probably not for you. Thankfully the old processes are still there if you want them.

mldee
02-02-2023, 12:35 PM
I hit my 9th decade on this planet this year, and I'm with you Adam.

gary
02-02-2023, 11:48 PM
Sorry Steve :)

I couldn't resist. :lol:

gary
03-02-2023, 12:01 AM
Sorry John, :)

I couldn't resist. :lol:

oska
03-02-2023, 01:49 PM
Indeed. And buzzword compliant too :)

OzEclipse
03-02-2023, 02:17 PM
I have been working with Terry Cuttle on a series of articles about solar eclipses and the 5 Australian eclipses for the AAQ website.

I have mostly been doing a lot of proofing and editing of Terry's articles. Terry asked me to write a 3-page article on eclipse photography. Easy I thought. Well no as it turns out. I ran into what you might call "reverse writers' block," or perhaps, "writers' verbose diarrhoea."

My first draft was a whopping 17 pages. I viciously edited it down and I only got it down to 9 pages then I stalled and could cut no more.

So I asked ChatGPT to write the article for me: "How do I photograph a total solar eclipse?"

Interesting exercise. It produced a succinct, fairly well-written, 1-page article. However, it was riddled with factual errors, omissions of critical info, and bad advice. It made an excellent starting point for me to correct the errors, omissions, and bad advice, and then turn it into the short article I needed to produce.

It basically told you to use a solar filter, then gave instructions on how to underexpose the partial eclipse by about 5 stops. Then it failed to tell you to remove the filter for totality, just to "check the live view and adjust the exposure settings accordingly." Hmmm, with a solar filter left in place, you would need roughly a 16-minute exposure to record the corona.
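For anyone wondering where a figure like that comes from, here is the rough back-of-envelope arithmetic, assuming a typical ND5 white-light solar filter (about 100,000x attenuation, roughly 16.6 stops) and an unfiltered inner-corona exposure on the order of 1/100 s; both numbers are illustrative assumptions rather than figures from the article:

$$ t_{\text{filtered}} \approx \tfrac{1}{100}\,\text{s} \times 10^{5} = 1000\,\text{s} \approx 16.7\ \text{minutes} $$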

I corrected the errors, fixed the most egregious omissions, then added some of my diagrams and a few extra bits of advice and voila, I had a 4-page article.

There is a lot of debate in the education sector about whether students will be able to get away with cheating using these tools. From what I have seen, the output is about as accurate as a feature written by one of the writers on space.com. For a longer student assignment on a subject like history, if the error ratio is as bad as the drivel I got out of it, teachers have nothing to worry about. It is nonetheless a very useful tool in the right hands, used with suitably expert human oversight.

I am including a screen shot here of the original ChatGPT article draft with my comments for your amusement:

RB
03-02-2023, 06:44 PM
Did you write this reply yourself Joe?

:lol:

OzEclipse
03-02-2023, 08:42 PM
😂🤣😂🤣😂🤣

gary
04-02-2023, 12:28 AM
Just wanted to follow up to make it clear that this earlier description of how you might think ChatGPT works is incorrect.



On the contrary, where it describes itself as a "state-of-the-art language
model, trained on vast amounts of text and equipped with deep learning
algorithms", these are not a set of buzzwords but instead the
precise technical terms used by computer scientists and engineers to
characterize this type of system.

Deep learning language models such as ChatGPT are neural networks
using a probabilistic approach to derive the output sequence from the
input sequence.

By comparison, pattern matching systems in the way you describe are
relatively simple algorithms.

And the punctuation of 'Done' as a kind of QED belies the fact that
this system is vastly more complex and sophisticated than you might
appreciate. This system is not just large, how it works is non-trivial.

In fact the description you provided showcases one of the challenges for
training systems such as these. They rely on a large corpus of text for
their training, and that overwhelmingly came from the web.

And of course most of the web is written by humans. So the old saying
of GIGO - garbage in, garbage out - is absolutely true. I am trying to
put this in the politest way without offending, but the description you
provided might be a good showcase for garbage. If a language model
were to parse it, it would add to its store of incorrect facts. It is a tough
problem, hence ChatGPT was trained on sources that tended to have
a higher probability of being factually correct compared to, say, opinions
on a chat forum.

As an engineer and a member of the Institute of Electrical and Electronics Engineers
and its associated Computer Society for 45 years, I've watched AI systems evolve over time.

I've been posting on these systems in the Science section since June of
2021 and you can follow one of those threads there if you would like to
learn more. :-
https://www.iceinspace.com.au/forum/showthread.php?t=192039

glend
04-02-2023, 12:41 AM
Sydney-based Absolutely AI's astro image wins a photo competition, I kid you not.

https://www.news.com.au/technology/gadgets/cameras/man-or-machine-is-this-australias-most-controversial-photo/news-story/bfd17c03d661d7b1a41aba3e64205e31

The article makes some important points and is worth reading.

oska
04-02-2023, 04:18 AM
Gary, I said fancy pattern matching. Obviously that's not simple grep or even Google's search thingy, that would be silly.

In its barest engineering essence, without using any hype, buzzwords, overly technical terms or overstating anything, in as few words as possible: what is it, technically, then?

What are "language models" and "deep learning algorithms" if not a bunch of weighted links on how phrases relate and fancy pattern matchers?

For your GIGO argument to be valid you would need to show the source being wrong in a statistically relevant manner, i.e. the obvious mistakes it makes with respect to context, relevance and accuracy being predominant in the material used to train it, such that it "learned wrong". To me, the "garbage out" nature of its responses, specifically and in general, demonstrates it being merely a fancy pattern matcher, a "next-gen Google" if you must, rather than anything that would reasonably be called even rudimentary AI. Sure, better training will help mitigate some of the shortcomings, but it will not change what it actually is.

I think it's wildly fascinating, especially the black box nature of the resulting networks.

Paul Haese
04-02-2023, 09:55 AM
I have been involved in astronomy since I was 9 and I am now about to turn 59. In that time much has changed. More and better equipment available to amateurs. No need to make everything yourself anymore. It's gone from photography (using dark room techniques) to imaging (using software techniques). The learning is part of the fun and satisfaction. This hobby is not for the impatient!

Along the way there has been a constant learning curve. Whilst it can seem like it is now plug and play it's not in reality. You will need to have a mechanical aptitude to setup, service and repair equipment. You still need to problem solve on occasions. Processing data still requires an element of skill and understanding of the science of Astronomy. Processing is about 50% of the exercise. If you think you just plug it in, walk away and then see the finished result later, you're dreaming.

As to AI sharpening, noise reduction etc., I don't agree that by putting garbage in you get silk out. You need good weather conditions, good setups, good guiding etc. I see AI deconvolution and noise reduction as tools only. Tools in an already very large tool box. I, like many experienced astrophotographers, use several programs when processing. Some elements of processing just require the use of Photoshop every single time. A few tedious aspects need automation. AI can't give me altitude-based images, but it can improve the point spread function of the data and reduce the noise levels a little; I still need to take a lot of data, and it still has to be good data. That alone does not make great images. There is a lot to do in processing aside from these tools. There is still a strong element of artistic leaning, understanding chemistry, and knowing what looks natural when processing. There is still the need to sit and do the work to produce the results. I imagine this will be the case for some time. When it becomes purely automated, then what will be the point of doing it? The truly lazy will love it, I imagine, but those of us that love the maker space will still be doing things the "hard" way.

g__day
04-02-2023, 11:57 AM
We are at an interesting moment in time - where a small freeware version of an AI has caused a massive stir and interest in A.I. in the public domain. I highlight "small freeware version" in comparison to what large governments, militaries, mega-corporations and well-funded intelligence agencies can use at their discretion. One wonders if ChatGPT is a mere demonstrative toy compared to their capabilities.

I do see the potential this has to reshape modern life - in many aspects. A human being will be defended in the USA for the first time this month by an A.I. Its ability to write and improve code vastly outstrips many human developers. It has already passed (in the bottom 20% of passes) the final law exam for Harvard law school graduates, and it is growing more sophisticated at an accelerating rate.

Whilst it isn't Skynet - one wonders what will happen when AIs start feeding themselves and designing better and better AIs, the way organic creatures evolve. It isn't aware software with its own intelligence - it is meant to replicate behaviour or responses statistically, as if it were. So outside its black box it appears frighteningly capable; inside the box it's data and algorithms whirling around executing their models - and this will likely continue until someone tells the world they have invented a quantum-level AI.

I have no way of knowing where this will all lead - the same way I have never seen a zombie apocalypse or end-of-the-world movie where we all ran out of toilet paper - but hey, who during Covid foresaw that?

I do foresee a time, real soon, when "made by AI" surpasses anything made by human hand or skill - and it actively gets marketed as such. I sense it will be a rather wild ride ahead. I wonder how far we are from the day you can ask an AI "What could you design and build that would supplant and replace you with something far more advanced?" and we get to see what an AI thinks will make AI itself obsolete...

glend
05-02-2023, 08:20 AM
My son and I were discussing AI going forward, and the big leap that GPT-4 will bring. Imagine a fleet of AI-enabled Starships leaving for Mars; on board is a troop of Boston Dynamics AI robots, tasked with assembling the Boring machine, 3D habitat printers, etc. These hive-minded robots running around on the surface, busy with their tasks, with no risk to any human. They return to their charging station when required. Sounds like the Borg to me. :help:

Boozlefoot
05-02-2023, 03:06 PM
What is another word for artificial?

unreal.
bogus.
counterfeit.
ersatz.
fabricated.
factitious.
faked.
false.

AI - An existential risk to humanity if its rational capacities are not steered towards beneficial goals.

Yes, as a visual astronomer, I wear the Luddite mantle with pride, and reflect with considerable dismay on the forthcoming possible lack of ability in all the basics among coming generations.

Maybe I'm just a cranky old man who was caned at school after being caught with a Texas Instruments calculator.

RB
05-02-2023, 03:15 PM
That's nothing, the whole class got caned for just being in the same room with the guy that had a TI calculator.

:lol:

xelasnave
05-02-2023, 05:44 PM
Only a cane? Heck we got the cat of nine tails with two extra tails if we were caught with an abacus.

Control by fear of hurt is just wrong.

And thinking back what miserable people were those who bullied children with the cane.


AI is actually real, as evidenced by the fact that it is being discussed, as well as by other things establishing it as very real indeed.

Hardly bogus but perhaps you could describe how that could be.

Counterfeit involves a copy passed off as the original and clearly this has not happened.

Ersatz how, exactly? Meanwhile I will look that word up...

Fabricated..well thank goodness as conjured out of thin air would be of great concern.

Faked..then there is nothing to worry about as the claims will come to nothing.

False..again if false there is nothing to worry about.

A list of words is hardly a reasonable substitute for a well-reasoned argument, and smacks of pulling on the emotions without any basis for the general misgivings.

AI is good (let's start from there) and the negative responses come mainly from folk who would even feel threatened in the presence of a human who has a better education and wider life experience...

There is nothing to be proud of in being a Luddite... these were a group of people who damaged property in a silly attempt to turn back the clock... shameful; read about what took place.

Moreover, why can't folk see just how these things will eventually move the human race forward? All this says to me is that we are acquiring better tools to manage... tools are what have made us great... I can just hear some old caveman growling when his son showed him the first flint spearhead: "no no no, this will ruin everything", etc. etc.

Even a casual understanding of evolution makes one realise that at least one important characteristic of a surviving species is its ability to manage change... so why not look at the benefits and look forward to when you will be a beneficiary?

Can't we as old men set an example to those who come behind us, that seeing the best in things before us is the way to be, that making praise our go-to is better than being negative... I have found in general that people like to be in the company of someone who is positive and hopeful, so I would think it is a sensible choice... still no statues of a critic that I know of...



Alex

Hans Tucker
05-02-2023, 05:58 PM
Going with the insults again rather than debating the point. Just because a person has reservations about AI doesn't mean they are less educated or lack life experience. The reality is AI is here; whether it is good or bad, time will tell.

xelasnave
05-02-2023, 06:26 PM
I was not attempting to insult, so I apologise for the way I put that... just here alone, desperately craving to be heard... actually I was trying to be humorous, which was silly because that never works... anyway, just because I say what I said about education and experience does not mean anyone here has to identify with that... most of all, Hans, I am sorry that you find me so annoying... well, I am not sorry I suppose, but it seemed like a nice thing to say... I can live with the fact many folk think the same way about me as you do. Perhaps I should care but I don't; folk can think what they like, which was really the point you were making, I guess.

I find all the negative talk that surfaces here tiresome and that's my excuse for my bad behaviour...

Alex

gary
05-02-2023, 08:19 PM
Thanks Steve :)

Look I know where you are coming from and I'm following up here not
to point score, but rather to have a stab at trying to explain why I would
not refer to ChatGPT as even the fanciest of pattern matchers.

My hope is that any interested reader who takes the time to read this
overly long response may come away with some appreciation that the
way ChatGPT operates is significantly different in many key ways from
the convolutional neural networks that are presented in the video.

And for the record, all the following text is mine, as tempting as it may
be to just get ChatGPT to do it. Thus any factual errors or terrible grammar
are purely mine. :lol:

I note that video was made in 2016 which at the current pace is very old.
It predates a seminal paper that came out of Google in 2017 that was
a game changer that made large language models like GPT-2/GPT-3 and
ChatGPT practically possible.

One feature of something that is purely a pattern matcher is that if you
gave it certain specific inputs, it would provide an identical result
each time. The convolutional neural network when processing images is
precisely that type of system.

If you present identical text to ChatGPT in new chat sessions, there is no
guarantee that the output will be identical.

In fact if you were to build an exact clone of the hardware of ChatGPT and
gave both systems the same input conversation, there is no guarantee that
the outputs would be the same.

I will touch upon why that is a little later.

You mention the word "phrases" here and in your first post where you offered an
explanation that, quoting "these phrases are associated with these other
phrases. The filtered phrases are passed on to another pattern matcher that makes
reasonable sentences. Done.".

That is incorrect because during real-time operation, remarkably ChatGPT
doesn't deal with phrases at all.

To understand why that is, it is useful to understand the size of the problem
if it could directly deal with phrases.

If you go back to the 2016 video link you posted, the presenter gives
a hypothetical example of a neural network that might be used to
estimate the price of a house given a set of input parameters it derives
from an image - number of windows, width and height of the building and
so on (2m33s in the video). In actual fact, he goes on to say that's not how
his image processing system works, but he does propose to the
viewer that it might be some "smart way of doing it".

Now consider the problem of natural language processing. If one were
to use the phrase approach, there would be an impractically
enormous number of inputs to deal with, because there are essentially an
infinite number of possible input phrases.

So what does ChatGPT do? Instead of dealing with phrases, it firstly deals
with a word at a time and uses probabilities as to what word is most likely
to come next.
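A toy illustration of that next-word step (the vocabulary, scores and temperature below are invented for the example; a real model scores tens of thousands of tokens and is vastly more complicated):

```python
# Toy next-word sampler: given model-assigned scores for candidate next
# words, turn them into probabilities with a softmax and sample one.
# A "temperature" below 1 sharpens the distribution, above 1 flattens it,
# which is one reason identical prompts can yield different replies.
import numpy as np

def sample_next_word(scores, temperature=0.8, rng=np.random.default_rng()):
    words = list(scores)
    logits = np.array([scores[w] for w in words], dtype=float) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(words, p=probs)

# Invented scores for continuing "The telescope was pointed at the ..."
candidate_scores = {"moon": 3.1, "nebula": 2.4, "galaxy": 2.2, "fridge": -1.0}
print(sample_next_word(candidate_scores))
```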

So far, so good. But if you build a system that just relies on what word
is most statistically likely to come next, it tends to quickly drift off topic
on some confusing tangent.

So you then think, "Okay, for it to stay on topic, each time I ask a new
question in a dialog on a subject, I will simply re-parse the entire
conversation up to and including the new question and so it will more
likely stay on topic".

The problem with that approach is that with each new input dialog from
the user and each new output dialog from ChatGPT, if it were to process
the entire conversation again through the neural network, it rapidly exceeds
its computational ability. There is just too much data.

So what do you do? You need to keep focus on the conversation by
distilling the whole conversation up to a given point into a smaller form
of data. Something small enough to store in memory,
which can be thought of as an abstract vector summarizing the
information from the conversation so far.

The hidden state is updated at each time step based on the input (i.e. the
message from the user) and the previous hidden state, allowing the model
to keep track of the context of the conversation and make informed
predictions about the next message.

These types of neural networks are known as Recurrent Neural Networks
(RNNs).

Unlike the convolutional network in the video, the RNN
ChatGPT employs is 'stateful'. Things don't simply pass in and
"filter through" a series of filters or pattern matchers. The hidden memory
component feeding back as one of the inputs makes the system a state machine.
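For the curious, the "hidden state fed back in at every step" idea can be sketched as the classic vanilla RNN update below. The weights and sizes are arbitrary placeholders; this is the textbook recurrence, not ChatGPT's actual architecture:

```python
# Toy vanilla RNN step: the new hidden state is a function of the current
# input and the previous hidden state, which is what makes the system
# stateful rather than a feed-forward stack of filters.
import numpy as np

def rnn_step(x, h_prev, W_xh, W_hh, b_h):
    """h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b)"""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Arbitrary sizes for illustration: 8-dimensional inputs, 16-dim hidden state.
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(16, 8))
W_hh = rng.normal(size=(16, 16))
b_h = np.zeros(16)

h = np.zeros(16)                      # hidden state starts empty
for x in rng.normal(size=(5, 8)):     # five time steps of made-up input
    h = rnn_step(x, h, W_xh, W_hh, b_h)
```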

Now RNNs have been around a long time, since the 1980s.
But a key question is: how do you decide what to retain in the hidden
state, and how do you do it? You only have limited memory and limited
processing capability.

So what to keep in this hidden state vector and what to throw away is
the tough part. It is the part that at any instant is trying to remember
the things that are important to keep the conversation on track.

Does this sound familiar? Probably, because it feels eerily similar to what
we do as humans when having a conversation with someone or reading
a novel.

As the conversation with another person progresses, we don't record
the entire conversation in our heads but instead pay attention to the
details we are talking about. In a similar way, if we are half way through
a thick novel and open it up again, we don't run through every word in
the book in our minds up to that point. We have some distillation of the
plot and what the characters were up to so when we start reading again
where we left off last time, they form inputs into the neural network in
our brains combined with the new words in the book so it makes some
sense to us.

Sure, we do pattern matching when we see the printed words on the
page. For example, it might say, "She turned on me with real fury
as though I were a child who had carelessly broken some vase she
had cherished over the years for its beauty and the memories it
contained". We do the pattern matching to pick out the space-delimited
words "She", "child", "vase" and so on, but for an abstract
phrase like this, which we are unlikely to have seen before, we don't
do any fancy phrase pattern matching. Nor, for that matter, does
ChatGPT.

What we do do is retain some distillation of the novel
up to that point, so we know who "she" is and who the person telling
the story is. As we parse the sentence we process it serially,
a word at a time, maintaining "attention" on what is important. In
this case, "She" (whom we know from before) is furious with the person
telling the story (whom we also know from before).

ChatGPT, like us, does the same during its conversation.

Now the really, really tricky part of this small distillation we keep in
our heads, of a conversation or of a novel up to the point where we left off,
is deciding what is important to retain and what is not.

When processing language, we look for what is important. "She", "me",
"fury". That's what we want to keep in the short term memory at least
for the next paragraph to make sense. Who "she" is and why she is furious
will have already been distilled in our compacted memory of the book.

That this is probably the way we process language as humans is often
demonstrated when we are having a conversation, become distracted
and then say, "What were we talking about again?" Our state vector
of the conversation to date requires a refresh. Sometimes both parties
can't recollect what they were talking about.

Now the tough part is knowing what to retain in that distillation
of the conversation. What to keep and what to throw away.
Over the years with RNNs there were several approaches:
architectures with names such as Long Short-Term Memory (LSTM) cells
and Gated Recurrent Units (GRUs), which use gating mechanisms to decide
what is remembered and what is forgotten, forerunners of the "attention"
mechanisms used to mimic cognitive attention, how we stay focused
when processing language.

Then in 2017, a year after that YouTube video, a breakthrough:
a paper entitled "Attention Is All You Need" by Vaswani et al.
It proposed a new attention-based architecture called the "Transformer".
This not only does a much better job than an RNN of staying
on track during a conversation, it is computationally efficient and
parallelizable, making it run faster.

The key innovation of the transformer architecture is the use of
"self-attention mechanisms", which allow the network to weigh the
importance of different parts of the input sequence when making
predictions. The self-attention mechanism allows the network to focus on
the most relevant parts of the input when making predictions, rather than
simply processing the entire sequence in a fixed order as in traditional
RNNs or convolutional neural networks.

So GPT-2/GPT-3/ChatGPT are examples of language models using
the "Transformer" algorithm to refresh their hidden state vector.

It gets more complex than that. ChatGPT also uses random number
generators to mix things up. Hence even if you built an identical clone,
it is likely to produce a differently worded conversation.

So ChatGPT does not pattern match on phrases but instead
uses an RNN-style stateful network with an internal feedback loop, which
in turn is refreshed by a "Transformer" algorithm to maintain
attention.

Mechanisms such as "attention" are key to language models and
hence the term "language model" is not a buzzword substitute
for a "fancy pattern matcher". They are two entirely different
concepts.

In fact it is not even a really fancy pattern matcher any more than
a computer is a fancy typewriter.

With a typewriter, you press the Q key and a Q is printed, and so on.
Deterministic. Place your finger here, it recognizes which key is pressed and
predictably the same letter is printed.

By comparison, on the old Enigma machine you would press a Q and,
depending on the settings, some other letter would come out,
say K. Its gears would turn, you press Q again, and some other
letter might come out, say B.

The statefulness of ChatGPT, its ability to effectively change its
internal state on the fly, plus the addition of a random number generator,
make it like an Enigma machine on steroids. Billions of times bigger.
No fancy matching of phrases at all.

Let's not beat about the bush. ChatGPT is certainly the most impressive
demonstration of software of any type I have ever seen.

Attached, over two files, is a conversation I had with ChatGPT a short while
ago which demonstrates its "attention" mechanism by way of the
"Transformer" algorithm.

I only mention the word "penguin" once at the start of the conversation.
Despite the fact that I do not use the word "penguin" again, but instead ask
questions such as "What do they eat?" and "How do they withstand the
cold?", notice how ChatGPT understands we are talking about penguins
and not, say, the "someone" who suggested a trip to the zoo.

gary
05-02-2023, 09:53 PM
Hi Julian,

Great to hear you have been playing with ChatGPT.

To help set your level of expectation, let me first of all make clear what
ChatGPT isn't.

It isn't sentient, and though it often demonstrates remarkable output,
including original stories, answers to an astonishing number of factual
questions and so on, it is not designed to be capable of "lateral thinking"
in the sense you and I know it.

So what is it? Well, it is a sneak preview of a research language model.
It happens to have been trained on an enormous amount of text, but right
now what it knows isn't as important as the fact that it can hold
an extended conversation at all.

You might remember the old TV show, Mr. Ed, the talking horse.
Mr. Ed was smart but would only ever talk to his owner, Wilbur.

Now imagine one day a neighbour calls out to you. "Psst, Julian. Get
over here. I have this horse that can talk."

You say, "Wow. That's incredible! Can I talk to him?"

And the neighbour says, "Sure, come over here and sit in the barn and
why don't you have a chat with him? He knows a lot of stuff. He would
do really well on a quiz show like Jeopardy. Knows capital cities, history
of nations, cellular biology. Well, in chatting to him, he seems to know an
awful lot more facts than me."

So you sit down in the barn and chat all day with him.

You come out of the barn and the neighbour says, "Well, what do you think?
Incredible, eh?" And you respond, "He couldn't give me a good lateral
thinking approach to astrophotography".

But then the neighbour says, "But he's a talking horse ..." :sadeyes:

It's easy to forget what the current release of ChatGPT is attempting
to demonstrate, which is that you are actually having a conversation
with a talking computer :)

glend
05-02-2023, 11:26 PM
Microsoft has just announced that their search engine Bing will have ChatGPT-4 shortly, and it will be live on the internet, not an archived dataset.
Details here:


https://medium.com/@owenyin/scoop-oh-the-things-youll-do-with-bing-s-chatgpt-62b42d8d7198

glend
06-02-2023, 01:18 AM
Seinfeld (continuous) as imagined by GPT3.

https://amp.theguardian.com/tv-and-radio/2023/feb/04/ai-seinfeld-nothing-forever-twitch

AdamJL
06-02-2023, 09:35 AM
hopefully something to blunt the power and pervasiveness of Google.

And sadly, I don't think we can sign up here in Australia, but I'll definitely be doing so as soon as I can.

AstroViking
06-02-2023, 09:37 AM
FWIW, I work in a technical field (very heavily IT) where I consider a wide range of knowledge to be essential.

The newer members of the team have, to my mind, a sad lack of what I would consider to be basic skills for the field. For example - the newest member of our team is studying for a PhD in the field, but knows nothing about networking.

gary
06-02-2023, 01:03 PM
Hi Joe,

Here's a tip.

Rather than ask ChatGPT to write a one page original work on some very
technical esoteric subject, you will sometimes get a better result by
prompting it with, "Shorten the following report to a one page quick
technical how-to guide" and then copying and pasting what you have
written after that.

Now at 9 pages it is very likely that it will exceed the maximum number
of input tokens (roughly, pieces of words), but if your document were split
into multiple sections, you might be able to feed it in a bit at a time.
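A crude way to do that "feed it in a bit at a time" splitting, approximating tokens by word count (the real tokeniser differs, and the limit and file name below are just placeholders):

```python
# Naive chunker: split a long document into word-count-limited pieces so
# each piece can be pasted into a prompt separately. Word count is only a
# rough proxy for the model's real token count.
def chunk_text(text, max_words=1500):
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# Hypothetical usage:
# for part in chunk_text(open("eclipse_article.txt").read()):
#     print("Shorten the following section to a quick how-to guide:\n\n" + part)
```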

Just as you yourself went from a draft of 17 pages down to 9 but stalled
at getting it below that, it probably means your
essential-facts-to-verbiage ratio was nearing unity, and so ChatGPT may
do no better without the result missing important stuff.

Let's face it. Say you are sitting on a plane flying to Western Australia
and some stranger asks if you are going on a holiday. You tell them
you are going to photograph a solar eclipse. They then ask, "Do you
need to do anything special to photograph them? How do you go
about that?" So you passionately launch into the full spiel, including all
your hard-won tips to "open up to f5.8-f8 where lens MTF is highest and
resolution sharpest" and so on.

Meanwhile, if you take notice, the stranger's eyes are starting to glaze
over.

When they asked, they were thinking, "Can I somehow snap this
with my phone?" :lol:

The reason I mention this is yesterday I wrote an overly long
post attempting to explain what is called the attention mechanism
within ChatGPT that attempts to keep it on topic.

A bit like the entry for "Earth" in the Hitchhikers Guide To the Galaxy
that originally read "Harmless" and was then revised to "Mostly harmless",
I guess it comes down to your target audience.

Believe me. Anyone who is serious enough to travel halfway around the
world or cross a continent to get a great image of a solar eclipse will
not hesitate to read 17 pages. So my best tip of all is to keep it as
you have it. Besides, they have plenty of time to read it on the plane. :)

That hypothetical stranger on the plane was looking for totally
different advice. :)

Though trained on a large set of text, the current purpose of ChatGPT
was not to be the oracle of all knowledge but a demonstration of a language
model.

The other day I watched a builder using the lithium battery
out of his portable drill to hammer in a nail.

I mention it of course as reference to the old adage of the right tool
for the right job.

So, as a language model, ChatGPT is probably better at word-crafting
what you as an expert have written than it is at hammering out an original
document of a technical nature. One day, maybe, but not now. :)

By the way, when I read the acronym you wrote, MTF, I had no idea
what it meant.

Having said it is not an oracle of all knowledge, it is amazing what
ChatGPT does know.

I asked ChatGPT :-

In the following sentence :-

"open up to f5.8-f8 where lens MTF is highest and resolution sharpest"

What does MTF stand for?



What does it indicate?



Does it use a Fourier transform?



Is it plotted in the complex plane?



So would something like a point spread function be utilized with the MTF Fourier transform?



So in a sense the lens is acting as a bandpass filter at certain frequencies and this is less than ideal for uniform contrast?



Where a lens is selectively passing certain frequencies of light, fundamentally what is the physics of how that takes place? Is it at an atomic level?



So ideally a perfect lens would allow 100% of light to pass through it at all optical frequencies and hence essentially be a unit transfer? Ideally a flat, linear response?





Well, you can learn something new every day. Good luck with the eclipse. :thumbsup:
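For reference, and not part of the ChatGPT transcript above: the connection those questions circle around is that the MTF is the magnitude of the optical transfer function, which in turn is the Fourier transform of the lens's point spread function:

$$ \mathrm{OTF}(f_x, f_y) = \mathcal{F}\{\mathrm{PSF}(x, y)\}, \qquad \mathrm{MTF}(f_x, f_y) = \bigl|\mathrm{OTF}(f_x, f_y)\bigr| $$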

gary
06-02-2023, 01:28 PM
As a demonstration of how ChatGPT can be used as a tool to do
word-smithing, I fed it my overly long post from yesterday and asked
it to write a 500 word version.

This summary omitted, for example, the reference to the seminal
"Attention Is All You Need" paper, the "Transformer" algorithm and so on.

But this is perhaps an example of where you can use it as a tool at work.
Your co-workers, after all, want all the details, like references to papers.
The Pointy-Haired Boss, meanwhile, might have requested a summary,
so it can help save you the time of having to word-smith it down.

oska
07-02-2023, 04:15 PM
https://www.zdnet.com/article/chatgpt-is-not-particularly-innovative-and-nothing-revolutionary-says-metas-chief-ai-scientist/

Probably just a blowhard though :lol:

gary
07-02-2023, 09:49 PM
In a press release today, Google is introducing Bard, an experimental
conversational AI service using a large language model.

Google says Bard seeks to provide fresh, high-quality responses based on
information from the web.

Bard is powered by LaMDA, Google's next-generation language and
conversation technology, and is currently being opened up to trusted
testers before it becomes more widely available to the public in the
near future.

Google says it re-oriented the company around AI six years ago as it sees
AI as the most important way to deliver on its mission to make information
universally accessible and useful.

Press release here :-
https://blog.google/technology/ai/bard-google-ai-search-updates/

Hans Tucker
09-02-2023, 06:50 AM
Interesting. Granted, the source is a tabloid so the facts are questionable, but the story raises an interesting issue.

https://www.dailymail.co.uk/news/article-11726269/Graduate-uses-ChatGPT-write-university-essay-gets-passing-grade.html

Hemi
09-02-2023, 09:48 AM
I’ve been away for a while with other work/life commitments (and terribly demotivating weather), but this thread has been a great (but very long) read.

I went to a surgical conference a few months ago entirely dedicated to AI in surgery. I was fascinated and amazed to see the sophisticated and novel uses of this tech in medicine already, in what is still a very immature technology/science.

The one thing that stuck with me, very obvious in hindsight, is that virtually all of this still has the same biases (i.e. jumping to possibly inappropriate conclusions) as the society which created the tech.

The answers that come out in whatever application (or chat bot), will have the same gender, age, race, religious biases that are prevalent in society.

Interesting…

I’m with Adam though, I’m very excited by the whole thing. What an amazing time to be living in this simulation.

gary
09-02-2023, 10:56 AM
You can get it to write your school essays or there is also plan B. :lol:

multiweb
10-02-2023, 06:09 PM
That looks interesting.

https://github.blog/2022-06-21-github-copilot-is-generally-available-to-all-developers/

AdamJL
10-02-2023, 06:24 PM
Certainly does, Marc! I wonder if it will make coding easier.

gary
11-02-2023, 11:30 AM
Hi Marc,

AI assisted code writing is absolutely going to become a hundred billion dollar market in the years ahead.

However, GitHub's Copilot was already controversial when it was commercialized.

Last year a copyright lawsuit was filed against them.

The claim is that Copilot was trained on some code that was copyrighted under
various open source licenses. So, for example, if one of those licences had
been the more restrictive GPL, the question in law becomes: is the
code output by Copilot tainted with the GPL? If you then use that
code as part of your company's closed-source software, have you
potentially tainted that as well?

Stories and opinion pieces here :-
https://www.theregister.com/2022/11/11/githubs_copilot_opinion/

https://felixreda.eu/2021/07/github-copilot-is-not-infringing-your-copyright/

https://www.theverge.com/2022/11/8/23446821/microsoft-openai-github-copilot-class-action-lawsuit-ai-copyright-violation-training-data


Now, where it became really controversial was in December 2022, when
Tim Davis, a computer science professor at Texas A&M University, found
he could get Copilot to emit large chunks of his copyrighted code that it had
slurped up in the training set, but missing all of the attribution and
license comments.

https://techcrunch.com/2022/12/08/github-launches-copilot-for-business-plan-as-legal-questions-remain-unresolved/



Screenshot image below.

Now any professional software engineer called upon to give expert
testimony is going to put their hand on a copy of K&R and swear to tell the
truth and state that in their professional opinion, that's extremely likely to be
a lift of Davis's code with trivial changes.

raymo
11-02-2023, 12:59 PM
I wonder where the original thread went.
raymo

gary
11-02-2023, 01:50 PM
Hi Raymo,





I think it is important to instill in the minds of interested readers
exactly where the state of the art is at; otherwise the
expectations they may set for themselves may exceed the actual
capabilities of this current run of tools.

The old adage, "Right tool for the right job", rings true.

Even one of the biggest corporations spectacularly screwed this up the
other day, when Google's Twitter feed posted a demo of their response
to ChatGPT, a language model named Bard.

When asked, "What new discoveries from the James Webb Space
Telescope can I tell my 9 year old about?", Bard responded with several
bullet points including the incorrect factoid that it took the very first
images of exoplanets.

In that instant, the share price of Google's parent company, Alphabet,
plummeted, and USD 100 billion of the company's market value was
lost, just like that.

In a rush to get it out there that Google was in the game too, the
corporation no doubt made that age old mistake. They left it to marketing. :lol:

Now a better, more factual announcement would have been to explain what
Bard is. An interactive chatbot.

Then using the best phrasing one could muster, put it to the market,
"Well we all know how great Google is. And we all know that Google
itself doesn't write the content. There is some great stuff on the web.
Then there is the stuff that is wrong. Wading through that can be quite
a challenge. Though our new ChatBot is not an oracle of knowledge and
it will say things that are incorrect, it can provide you with another valuable
tool to perform a search with".

With uncurated or only partially curated content, having systems that are
factually correct every time remains a highly desirable goal.
For the time being, the great achievement is that we have created machines that
are highly conversant in written language.

ReidG
13-02-2023, 10:47 AM
AI like ChatGPT shows great potential but it has some problems at this early stage.


MS Bing ventured into the future to report on the US Super Bowl before the event; it was not asked for a prediction.
Watch for the results and see what really happened.



https://apnews.com/article/kansas-city-chiefs-philadelphia-eagles-technology-science-82bc20f207e3e4cf81abc6a5d9e6b23a

LewisM
13-02-2023, 08:30 PM
There will always be those who persevere with the old, for personal satisfaction and just because.

To paraphrase John F. Kennedy : "We choose [these things]...not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too."

AI and so on serve immeasurable purposes, yet so do time-honoured skills and techniques: the human touch that AI will never have, no matter how hard proponents claim "AI made by humans = human", whereas in reality ne'er the 'twain shall meet.

There will also always be show ponies and grandiose chest beaters demanding of public attention and the showiest bling one can imagine - and sometimes it is imagined - and there will be the meek and humble doing things for their own enjoyment and gratification. And these two distinct parties will also choose a methodology, and both shall be proponents thereof, and therein lies the joy of humanity - the right to choose, the right to self expression, and the right to love or loathe - to have opinion and critique. There is no right or wrong.

Am I a fan of the modern point-and-click insta-APOD or Facebook-Malin? No, but let's let others choose that path should they wish. The old ways will not die; we see that in revived, reviving and continuing age-old traditions via the master craftsmen.

Evolve and grow, expand, but always remember the past.

EpickCrom
13-02-2023, 09:04 PM
AI won't have any effect on us die-hard visual observers... unless they come up with an AI eyepiece, oh :rofl:

LewisM
13-02-2023, 09:04 PM
Don't tempt fate :P

RB
13-02-2023, 09:31 PM
This was written by AI.



This was written by Lewis.

:lol:

LewisM
13-02-2023, 11:45 PM
Careful now, I'll send my minions....

RB
14-02-2023, 12:30 AM
:lol:

multiweb
14-02-2023, 07:34 AM
:lol::lol::lol:

glend
14-02-2023, 08:05 AM
They already exist in the form of EAA.

EpickCrom
14-02-2023, 08:32 AM
Ahh EAA, I forgot about that. That is something I will definitely try one day..

ReidG
15-02-2023, 03:19 PM
AI for search may be just a tad premature.
It will most likely get sorted but for now it is not quite ready.


Someone from marketing got ahead of the workers again.


https://edition.cnn.com/2023/02/14/tech/microsoft-bing-ai-errors/index.html

AdamJL
03-03-2023, 06:01 PM
I've been having so much fun with Bing's AI. I made a post about one interaction here:
https://www.iceinspace.com.au/forum/showthread.php?p=1583955#post1583955

Today I asked it for some details on step size for the Esatto 3" focuser...

Here's the brief chat.
Only thing is I have no idea if this one is right or not :rofl:

AdamJL
03-03-2023, 06:07 PM
^that's the best part about Bing's integration with AI. It's backed by search data.

AdamJL
06-03-2023, 05:52 PM
Still having fun with Bing. I'll get back to telescope questions soon!
Managed to get it to strike out this time.