#41 | gary | 05-02-2023, 08:19 PM
Quote:
Originally Posted by oska View Post
Garry, I said fancy pattern matching. Obviously that's not simple grep or even google's search thingy, that's just silly.

In its barest engineering essence without using any hype, buzzwords, overly technical terms or overstating anything, in as few words as possible what is it technically then?

What are "language models" and "deep learning algorithms" if not a bunch of weighted links on how phrases relate and fancy pattern matchers?
Thanks Steve

Look, I know where you are coming from and I'm following up here not
to point-score, but rather to have a stab at explaining why I would
not refer to ChatGPT as even the fanciest of pattern matchers.

My hope is that any interested reader who takes the time to read this
overly long response may come away with some appreciation that the way
ChatGPT operates is significantly different, in many key ways, from the
convolutional neural networks presented in the video.

And for the record, all the following text is mine, as tempting as it
may be to just get ChatGPT to do it. Thus any factual errors or
terrible grammar are purely mine.

I note that the video was made in 2016, which at the current pace is
very old. It predates a seminal paper that came out of Google in 2017,
a game changer that made large language models like GPT-2, GPT-3 and
ChatGPT practically possible.

One feature of something that is purely a pattern matcher is that if
you gave it the same specific inputs, it would produce an identical
result each time. The convolutional neural network in the video,
processing images, is precisely that type of system.

If you present identical text to ChatGPT in new chat sessions, there is no
guarantee that the output will be identical.

In fact if you were to build an exact clone of the hardware of ChatGPT and
gave both systems the same input conversation, there is no guarantee that
the outputs would be the same.

I will touch upon why that is a little later.

You mention the word "phrases" here and in your first post, where you
offered the explanation, quoting: "these phrases are associated with
these other phrases. The filtered phrases are passed on to another
pattern matcher that makes reasonable sentences. Done."

That is incorrect because during real-time operation, remarkably ChatGPT
doesn't deal with phrases at all.

To understand why that is, it is useful to understand the size of the problem
if it could directly deal with phrases.

If you go back to the 2016 video link you posted, the presenter gives
a hypothetical example of a neural network that might be used to
estimate the price of a house given a set of input parameters it derives
from an image - number of windows, width and height of building and so
on (2m33s in the video). In actual fact, he goes on to say that's not
how his image processing system works, but he does propose to the
viewer that there might be some "smart way of doing it".

Now consider the problem of natural language processing. If one were
to use the phrase approach, there would be an impractically enormous
number of inputs to deal with, because there are essentially an
infinite number of possible input phrases.

So what does ChatGPT do? Instead of dealing with phrases, it deals
with one word at a time and uses probabilities as to which word is
most likely to come next.

So far, so good. But if you build a system that just relies on what word
is most statistically likely to come next, it tends to quickly drift off topic
on some confusing tangent.
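
To make that concrete, here is a toy sketch in Python (entirely my own
illustration with a made-up three-sentence corpus, nothing like how
ChatGPT is actually built) of a greedy next-word predictor based on
word-pair counts. Because it only ever looks at the previous word, it
quickly loops and wanders:

[CODE]
from collections import Counter, defaultdict

# A made-up toy corpus; a real model is trained on a vast amount of text.
corpus = ("the penguin lives in the cold . the penguin eats fish . "
          "the penguin lives in the sea .").split()

# Count which word follows which (a simple word-pair table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

# Greedy generation: always pick the single most likely next word.
word, output = "penguin", ["penguin"]
for _ in range(8):
    word = following[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))
# -> "penguin lives in the penguin lives in the penguin"
# With no memory of the wider context it just goes around in circles.
[/CODE]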

So you then think, "Okay, for it to stay on topic, each time I ask a new
question in a dialog on a subject, I will simply re-parse the entire
conversation up to and including the new question and so it will more
likely stay on topic".

The problem with that approach is that with each new input dialog from
the user and each new output dialog from ChatGPT, if it were to process
the entire conversation again through the neural network, it rapidly exceeds
its computational ability. There is just too much data.

So what do you do? You need to keep focus on the conversation by
distilling the whole conversation up to a given point into a smaller
form of data: something small enough to store in memory, which can be
thought of as an abstract vector summarizing the information from the
conversation so far.

The hidden state is updated at each time step based on the input (i.e. the
message from the user) and the previous hidden state, allowing the model
to keep track of the context of the conversation and make informed
predictions about the next message.

These types of neural networks are known as Recurrent Neural Networks
(RNNs).

Unlike the convolutional network in the video, the RNN
ChatGPT employs is 'stateful'. Things don't simply pass in and
"filter through" a series of filters or pattern matchers. The hidden memory
component feeding back as one of the inputs makes the system a state machine.
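
For anyone curious what that looks like in code, here is a bare-bones
sketch of a single recurrent step (toy sizes, random stand-in weights
and random stand-in word vectors of my own choosing; a real model
learns its weights from training data):

[CODE]
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size = 8, 4            # tiny sizes, purely for illustration

W_x = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input weights
W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # recurrent weights
b = np.zeros(hidden_size)

def rnn_step(hidden_state, word_vector):
    """One time step: fold the new word into the running summary of the chat."""
    return np.tanh(W_x @ word_vector + W_h @ hidden_state + b)

hidden = np.zeros(hidden_size)            # empty "memory" before the chat starts
for word_vector in rng.normal(size=(5, input_size)):  # five stand-in word vectors
    hidden = rnn_step(hidden, word_vector)             # the distilled context so far

print(hidden.round(3))                    # one small vector, not the whole transcript
[/CODE]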

Now RNNs have been around a long time, since the 1980s.
But a key question is: how do you decide what to retain in the hidden
state, and how do you do it? You only have limited memory and limited
processing capability.

So what to keep in this hidden state vector and what to throw away is
the tough part. It is the part that at any instant is trying to remember
the things that are important to keep the conversation on track.

Does this sound familiar? Probably, because it feels eerily similar to what
we do as humans when having a conversation with someone or reading
a novel.

As the conversation with another person progresses, we don't record
the entire conversation in our heads but instead pay attention to the
details we are talking about. In a similar way, if we are halfway
through a thick novel and open it up again, we don't run through every
word of the book in our minds up to that point. We have some
distillation of the plot and what the characters were up to, so when
we start reading again where we left off, that distillation, combined
with the new words on the page, forms the input into the neural
network in our brains and the story makes sense to us.

Sure, we do pattern matching when we see the printed words on the
page. For example, it might say, "She turned on me with real fury
as though I were a child who had carelessly broken some vase she
had cherished over the years for its beauty and the memories it
contained". We do the pattern matching to pick out the space-delimited
words "She", "child", "vase" and so on, but for an abstract phrase
like this, which we are unlikely to have seen before, we don't do any
fancy phrase pattern matching. Nor, for that matter, does ChatGPT.

What we do do is retain some distillation of the novel up to that
point, so we know who "she" is and who the person telling the story
is. As we parse the sentence we process it serially, a word at a time,
maintaining "attention" on what is important: in this case, "she"
(whom we know from earlier) is furious with the person telling the
story (whom we also know from earlier).

ChatGPT, like us, does the same during its conversation.

Now the really, really tricky part of this small distillation we keep
in our heads, whether of a conversation or of a novel up to the point
where we left off, is deciding what is important to retain and what
is not.

When processing language, we look for what is important. "She", "me",
"fury". That's what we want to keep in the short term memory at least
for the next paragraph to make sense. Who "she" is and why she is furious
will have already been distilled in our compacted memory of the book.

That this is probably the way we process language as humans is often
demonstrated when we are having a conversation, become distracted
and then say, "What were we talking about again?" Our state vector
of the conversation to date requires a refresh. Sometimes both parties
can't recollect what they were talking about.

Now the tough part is knowing what to retain in that distillation
of the conversation: what to keep and what to throw away.
Over the years, several approaches were developed for RNNs, with
names such as Long Short-Term Memory (LSTM) cells and Gated Recurrent
Units (GRUs). These are gating mechanisms that decide what to remember
and what to forget, playing a role loosely analogous to cognitive
attention: how we stay focused when processing language.
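
As a rough illustration of what "gating" means here, this is a sketch
of a single GRU-style update (made-up sizes and random weights; the
real values are learned during training). The update gate z decides,
element by element, how much of the old hidden state to keep and how
much to overwrite:

[CODE]
import numpy as np

rng = np.random.default_rng(1)
n = 6                                          # tiny size, for illustration only
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

# Random matrices standing in for learned parameters.
W_z, U_z = rng.normal(scale=0.1, size=(n, n)), rng.normal(scale=0.1, size=(n, n))
W_r, U_r = rng.normal(scale=0.1, size=(n, n)), rng.normal(scale=0.1, size=(n, n))
W_h, U_h = rng.normal(scale=0.1, size=(n, n)), rng.normal(scale=0.1, size=(n, n))

def gru_step(h_prev, x):
    z = sigmoid(W_z @ x + U_z @ h_prev)        # update gate: keep vs. overwrite
    r = sigmoid(W_r @ x + U_r @ h_prev)        # reset gate: how much history to consult
    h_candidate = np.tanh(W_h @ x + U_h @ (r * h_prev))
    return (1 - z) * h_prev + z * h_candidate  # blend old memory with new content

h = np.zeros(n)
for x in rng.normal(size=(4, n)):              # four stand-in word vectors
    h = gru_step(h, x)
print(h.round(3))
[/CODE]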

Then in 2017, a year after that YouTube video, came a breakthrough:
a paper entitled "Attention Is All You Need" by Vaswani et al.
It proposed a new architecture, the "Transformer", built around a new
attention mechanism. It not only does a much better job than an RNN
of staying on track during a conversation, it is also computationally
efficient and parallelizable, making it faster to run.

The key innovation of the transformer architecture is the use of
"self-attention mechanisms", which allow the network to weigh the
importance of different parts of the input sequence when making
predictions. The self-attention mechanism allows the network to focus on
the most relevant parts of the input when making predictions, rather than
simply processing the entire sequence in a fixed order as in traditional
RNNs or convolutional neural networks.
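
For the curious, here is what scaled dot-product self-attention, the
core operation of that paper, boils down to (a sketch with toy sizes
and random matrices of my own choosing; the real thing adds multiple
heads, stacked layers, positional information and much more):

[CODE]
import numpy as np

rng = np.random.default_rng(2)
seq_len, d_model = 5, 16                  # five tokens, 16-dimensional vectors

X = rng.normal(size=(seq_len, d_model))   # stand-in token embeddings
W_q = rng.normal(scale=0.1, size=(d_model, d_model))
W_k = rng.normal(scale=0.1, size=(d_model, d_model))
W_v = rng.normal(scale=0.1, size=(d_model, d_model))

Q, K, V = X @ W_q, X @ W_k, X @ W_v       # queries, keys, values

scores = Q @ K.T / np.sqrt(d_model)       # how relevant is each token to every other?
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # softmax: each row sums to 1

output = weights @ V                      # every token becomes a weighted mix of all tokens
print(weights.round(2))                   # the "attention" each token pays to the others
[/CODE]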

So GPT-2/GPT-3/ChatGPT are examples of language models using
the "Transformer" algorithm to refresh their hidden state vector.

It gets more complex than that. ChatGPT also uses random number
generators to mix things up. Hence even if you built an identical
clone, it is likely to produce a differently worded conversation.
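
Here is a sketch of that last point (the candidate words and scores
are made up by me): the model assigns a probability to every candidate
next word, and the word is then drawn at random according to those
probabilities, so two identical machines given identical input can
still answer differently:

[CODE]
import numpy as np

# Hypothetical raw scores the network might assign to the next word.
candidates = ["fish", "krill", "squid", "seals"]
logits = np.array([2.0, 1.5, 0.8, 0.1])

temperature = 0.8                # higher = more variety, lower = more predictable
probs = np.exp(logits / temperature)
probs /= probs.sum()

rng = np.random.default_rng()    # deliberately unseeded: each run can differ
for _ in range(3):
    print(rng.choice(candidates, p=probs))   # same input, possibly a different word
[/CODE]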

So ChatGPT does not pattern match on phrases but instead
uses an RNN with an internal feedback loop that is stateful, which
in turn is refreshed by a "Transformer" algorithm to maintain
attention.

Mechanisms such as "attention" are key to language models and
hence the term "language model" is not a buzzword substitute
for a "fancy pattern matcher". They are two entirely different
concepts.

In fact it is not even a really fancy pattern matcher any more than
a computer is a fancy typewriter.

With a typewriter, you press the Q key and a Q is printed, and so on.
Deterministic: place your finger here, it recognizes which key is
pressed, and predictably the same letter is printed.

By comparison, on the old Enigma machine you would press a Q and,
depending on the settings, some other letter would come out, say K.
Its gears would turn, you would press Q again, and some other letter
might come out, say B.

The statefulness of ChatGPT, its ability to effectively change its
internal state on the fly, plus the addition of a random number
generator, make it like an Enigma machine on steroids, billions of
times bigger. No fancy matching of phrases at all.
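
A toy sketch of that statefulness (nothing to do with the real Enigma
wiring, just a hypothetical rotating substitution of my own): press
the same key twice and you get different letters, because the
machine's internal state has moved on in between.

[CODE]
import string

ALPHABET = string.ascii_uppercase

def make_machine(offset=0):
    """A toy 'rotor': a shifting substitution whose offset advances per keypress."""
    state = {"offset": offset}
    def press(key):
        out = ALPHABET[(ALPHABET.index(key) + state["offset"]) % 26]
        state["offset"] += 1      # the internal state changes after every input
        return out
    return press

press = make_machine(offset=10)
print(press("Q"), press("Q"), press("Q"))   # -> A B C : same input, different outputs
[/CODE]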

Let's not beat about the bush. ChatGPT is certainly the most impressive
demonstration of software of any type I have ever seen.

Attached, over two files is a conversation I had with GPT a short while
ago which demonstrates its "attention" mechanism by way of the
"Transformer" algorithm.

I only mention the word "penguin" once at the start of the conversation.
Despite the fact that I do not use the word "penguin" again but instead
ask questions such as "What do they eat?" and "How do they withstand
the cold?", notice how ChatGPT understands we are talking about
penguins and not, say, the "someone" who suggested a trip to the zoo.
[Attached images: penguin1.jpg, penguin2.jpg]

#42 | gary | 05-02-2023, 09:53 PM
Quote:
Originally Posted by julianh72 View Post
I just posed this request to ChatGPT:

Demonstrate "lateral thinking" to develop an innovative means of capturing astrophotography

This was the response:

One approach to demonstrating lateral thinking in astrophotography might involve rethinking the tools and techniques used for capturing images. Instead of relying on traditional telescopes and cameras, for example, one might experiment with new technologies such as:

Drones: Equipping drones with cameras and flying them at high altitudes could provide a unique perspective for capturing astrophotography images.
Balloon-based cameras: Launching cameras attached to balloons into the stratosphere could also offer new angles and perspectives for capturing images of celestial objects.
Augmented reality: Developing an app that combines data from telescopes and cameras with augmented reality technology could create a new way of experiencing and capturing astrophotography images.
Multi-spectral imaging: Using sensors that can capture images across a range of wavelengths, beyond the visible light that traditional cameras pick up, could reveal previously hidden details in astronomical objects.


Nothing truly earth-shattering there - but some food for thought for hackers and makers?
Hi Julian,

Great to hear you have been playing with ChatGPT.

To help set your level of expectation, let me first of all make clear what
ChatGPT isn't.

It isn't sentient, and though it often demonstrates remarkable output,
including original stories, answers to an astonishing number of
factual questions and so on, it is not designed to be capable of
"lateral thinking" in the sense you and I know.

So what is it? Well, it is a sneak preview of a research language
model. It happens to have been trained on an enormous amount of text,
but right now what matters is not so much what it knows as the fact
that it can hold an extended conversation at all.

You might remember the old TV show, Mr. Ed, the talking horse.
Mr. Ed was smart but would only ever talk to his owner, Wilbur.

Now imagine one day a neighbour calls out to you. "Psst, Julian. Get
over here. I have this horse that can talk."

You say, "Wow. That's incredible! Can I talk to him?"

And the neighbour says, "Sure, come over here and sit in the barn and
why don't you have a chat with him? He knows a lot of stuff. He would
do really well on a quiz show like Jeopardy. Knows capital cities, history
of nations, cellular biology. Well, in chatting to him, he seems to know an
awful lot more facts than me."

So you sit down in the barn and chat all day with him.

You come out of the barn and the neighbour says, "Well, what do you
think? Incredible, eh?" And you respond, "He couldn't give me a good
lateral thinking approach to astrophotography".

But then the neighbour says, "But he's a talking horse ..."

It's easy to forget what the current release of ChatGPT is attempting
to demonstrate, which is that you are actually having a conversation
with a talking computer.
#43 | glend (Glen) | 05-02-2023, 11:26 PM
Microsoft has just announced that their search engine Bing will have ChatGPT-4 shortly, and it will be live on the internet, not limited to an archived dataset.
Details here:


https://medium.com/@owenyin/scoop-oh...t-62b42d8d7198

#44 | glend (Glen) | 06-02-2023, 01:18 AM
Seinfeld (running continuously), as imagined by GPT-3.

https://amp.theguardian.com/tv-and-r...forever-twitch
#45 | AdamJL | 06-02-2023, 09:35 AM
Quote:
Originally Posted by glend View Post
Microsoft has just announced that their search engine Bing, will have ChatGPT-4 shortly, and it will be online on the internet, no archived dataset.
Details here:


https://medium.com/@owenyin/scoop-oh...t-62b42d8d7198
Hopefully something to blunt the power and pervasiveness of Google.

And sadly, I don't think we can sign up here in Australia yet, but I'll definitely be doing so as soon as I can.
#46 | AstroViking (Steve) | 06-02-2023, 09:37 AM
Quote:
Originally Posted by Boozlefoot View Post
and reflect with considerable dismay the forthcoming possible lack of ability in all the basics by coming generations.

FWIW, I work in a technical field (very heavily IT) where I consider a wide range of knowledge to be essential.

The newer members of the team have, to my mind, a sad lack of what I would consider to be basic skills for the field. For example - the newest member of our team is studying for a PhD in the field, but knows nothing about networking.
#47 | gary | 06-02-2023, 01:03 PM
Quote:
Originally Posted by OzEclipse View Post
I have been working with Terry Cuttle on a series of articles about solar eclipses and the 5 Australian eclipses for the AAQ website.

I have mostly been doing a lot of proofing and editing of Terry's articles. Terry asked me to write a 3-page article on eclipse photography. Easy I thought. Well no as it turns out. I ran into what you might call "reverse writers' block," or perhaps, "writers' verbose diarrhoea."

My first draft was a whopping 17 pages. I viciously edited it down and I only got it down to 9 pages then I stalled and could cut no more.

So, I asked ChatGPT to write the article for me: "How do I photograph a total solar eclipse?"
Hi Joe,

Here's a tip.

Rather than ask ChatGPT to write a one page original work on some very
technical esoteric subject, you will sometimes get a better result by
prompting it with, "Shorten the following report to a one page quick
technical how-to guide" and then copying and pasting what you have
written after that.

Now at 9 pages it is very likely to exceed the maximum number of input
tokens (roughly, fragments of words), but if your document is split
into multiple sections, you might be able to feed it in a bit at a time.
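
A rough sketch of what I mean by feeding it in a bit at a time (the
four-characters-per-token figure is only a common rule of thumb, and
the token limit shown is an assumption on my part, not ChatGPT's
actual number):

[CODE]
def split_into_chunks(text, max_tokens=3000, chars_per_token=4):
    """Split a long document into pieces small enough to paste into one prompt."""
    max_chars = max_tokens * chars_per_token
    chunks, current = [], ""
    for para in text.split("\n\n"):        # keep paragraphs intact where possible
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

# Usage: paste each chunk in turn, prefixed with something like
# "Shorten the following section of a report into part of a one page guide:"
[/CODE]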

Just as you yourself went from a draft of 17 pages down to 9 but
balked at getting it below that, your ratio of essential facts to
verbiage was probably nearing unity, so ChatGPT may do no better
without dropping important material.

Let's face it. Say you are sitting on a plane flying to Western
Australia and some stranger asks if you are going on a holiday. You
tell them you are going to photograph a solar eclipse. They then ask,
"Do you need to do anything special to photograph them? How do you go
about that?" So you passionately launch into the full spiel, including
all your hard-won tips to "open up to f5.8-f8 where lens MTF is
highest and resolution sharpest" and so on.

Meanwhile, if you take notice, the stranger's eyes are starting to glaze
over.

When they asked, they were thinking, "Can I somehow snap this
with my phone?"

The reason I mention this is that yesterday I wrote an overly long
post attempting to explain the so-called attention mechanism within
ChatGPT that tries to keep it on topic.

A bit like the entry for "Earth" in The Hitchhiker's Guide to the
Galaxy that originally read "Harmless" and was then revised to "Mostly
harmless", I guess it comes down to your target audience.

Believe me. Anyone who is serious enough to travel halfway around the
world or across a continent to get a great image of a solar eclipse
will not hesitate to read 17 pages. So my best tip of all is to keep
it as you have it. Besides, they will have plenty of time to read it
on the plane.

That hypothetical stranger on the plane was looking for totally
different advice.

Though trained on a large set of text, the current purpose of ChatGPT
was not to be the oracle of all knowledge but a demonstration of a language
model.

The other day I watched a builder using the lithium battery
out of his portable drill to hammer in a nail.

I mention it of course as reference to the old adage of the right tool
for the right job.

Hence, as a language model, ChatGPT is probably better at word-crafting
what you as an expert have written than at hammering out an original
document of a technical nature. One day, maybe, but not now.

By the way, when I read the acronym MTF that you wrote, I had no idea
what it meant.

Having said it is not an oracle of all knowledge, it is amazing what
ChatGPT does know.

I asked ChatGPT :-

In the following sentence :-

"open up to f5.8-f8 where lens MTF is highest and resolution sharpest"

What does MTF stand for?

Quote:
Originally Posted by ChatGPT
MTF stands for Modulation Transfer Function.
What does it indicate?

Quote:
Originally Posted by ChatGPT
The Modulation Transfer Function (MTF) is a measure of the image quality of a lens. It indicates how well the lens is able to transfer the contrast of an object in the scene to the image. The MTF plot shows the relationship between spatial frequency and contrast, and provides information about how much of the original detail of an object is retained in the final image. A higher MTF value indicates that the lens is able to retain more detail, resulting in sharper and more detailed images.
Does it use a Fourier transform?

Quote:
Originally Posted by ChatGPT
Yes, the Modulation Transfer Function (MTF) is calculated using a Fourier Transform. The Fourier Transform is used to analyze the spatial frequency content of an image and to determine how well the lens is able to transfer contrast from an object to the image. By calculating the MTF, lens manufacturers can quantify the performance of their lenses and determine the optimal aperture setting for the highest image quality. The MTF measurement is usually performed at different spatial frequencies and plotted as a graph to provide a visual representation of the lens's performance.
Is it plotted in the complex plane?

Quote:
Originally Posted by ChatGPT
The Modulation Transfer Function (MTF) is not plotted in the complex plane. Instead, it is usually plotted as a graph with spatial frequency on the x-axis and contrast on the y-axis. The contrast is typically expressed as a percentage and is measured at different spatial frequencies. The graph shows how well the lens is able to transfer contrast from an object to the image as a function of spatial frequency, and provides information about the sharpness and detail of the lens. The MTF measurement can be used to compare different lenses and to determine the optimal aperture setting for the highest image quality.
So would something like a point spread function be utilized with the MTF Fourier transform?

Quote:
Originally Posted by ChatGPT
Yes, the Point Spread Function (PSF) is related to the Modulation Transfer Function (MTF) and can be used in conjunction with the Fourier Transform to analyze the performance of a lens. The PSF describes how an idealized point source of light is spread out as it passes through the lens and is used to calculate the MTF. The MTF is the Fourier Transform of the PSF and provides information about how much contrast is retained at different spatial frequencies. By analyzing the PSF and the MTF, lens manufacturers can determine the sharpness and detail of their lenses and optimize the design for the highest image quality.
So in a sense the lens is acting as a bandpass filter at certain frequencies and this is less than ideal for uniform contrast?

Quote:
Originally Posted by ChatGPT
Yes, that's correct. A lens can be thought of as a filter that selectively passes certain frequencies of light and reduces the contrast at other frequencies. The Modulation Transfer Function (MTF) provides a quantitative measure of this effect and shows how well the lens is able to transfer contrast from an object to the image. If the MTF is low at certain spatial frequencies, it means that the lens is not effectively passing the information at those frequencies and the contrast in the final image will be reduced. This can result in images that appear soft or have less detail. A higher MTF value indicates that the lens is able to retain more detail and provides a clearer, sharper image. In general, lens manufacturers aim to design lenses with high MTF values across a wide range of spatial frequencies to achieve the highest image quality.
Where a lens is selectively passing certain frequencies of light, fundamentally what is the physics of how that takes place? Is it at an atomic level?

Quote:
Originally Posted by ChatGPT
The physics of how a lens selectively passes certain frequencies of light occurs at the macroscopic level, although it is influenced by the atomic-level structure of the lens material. When light passes through a lens, it is refracted, or bent, by the lens material. The amount of bending depends on the refractive index of the lens material, which is a measure of how much the speed of light is slowed down in the material. The refractive index also varies with the wavelength of light, which means that different colors of light are bent by different amounts. This results in chromatic aberrations, where different colors of light are focused at different points, and is one of the factors that can impact the quality of the final image.

In addition to chromatic aberrations, the shape and curvature of the lens elements also play a role in determining the MTF. The lens elements are designed to have a specific shape and curvature to control the way that light is refracted and focused. This allows the lens to effectively pass certain frequencies of light and reduces the contrast at other frequencies. The combination of the lens material, the shape and curvature of the lens elements, and the aperture setting all work together to determine the final MTF of the lens.
So ideally a perfect lens would allow 100% of light to pass through it at all optical frequencies and hence essentially be a unit transfer? Ideally a flat, linear response?

Quote:
Originally Posted by ChatGPT
In theory, an ideal lens would allow 100% of light to pass through it without any loss of contrast or reduction in image quality at all optical frequencies. This ideal lens would have a Modulation Transfer Function (MTF) that is a flat, linear response with a value of 100% across all spatial frequencies. However, in practice, it is not possible to create a lens that meets this ideal specification. All lenses will have some level of aberrations, such as chromatic aberrations, spherical aberrations, and distortion, that impact the final MTF and result in a reduction in image quality. The goal of lens design is to minimize these aberrations and produce lenses with the highest possible MTF values across a range of spatial frequencies. This allows for the creation of lenses that produce sharp, detailed images with accurate color and minimal distortion.
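
For anyone who wants to poke at the PSF/MTF relationship ChatGPT
describes above, here is a quick numerical sketch (a made-up
one-dimensional Gaussian blur standing in for a real lens PSF): the
MTF is the magnitude of the Fourier transform of the PSF, normalized
so that contrast is 1 at zero spatial frequency.

[CODE]
import numpy as np

# A hypothetical 1-D point spread function: a Gaussian blur standing in for a lens.
x = np.linspace(-5, 5, 512)              # position, in arbitrary units
psf = np.exp(-x**2 / (2 * 0.5**2))       # sigma = 0.5, made up for illustration
psf /= psf.sum()                         # normalize total energy to 1

mtf = np.abs(np.fft.rfft(psf))           # MTF = |Fourier transform of the PSF|
mtf /= mtf[0]                            # contrast = 1 at zero spatial frequency

freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])   # cycles per unit distance
for f, m in list(zip(freqs, mtf))[:5]:
    print(f"{f:6.3f} cycles/unit -> contrast {m:5.3f}")
# Contrast falls as spatial frequency rises: fine detail is transferred
# less faithfully than coarse detail.
[/CODE]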


Well, you can learn something new every day. Good luck with the eclipse.
#48 | gary | 06-02-2023, 01:28 PM
As a demonstration of how ChatGPT can be used as a tool to do
word-smithing, I fed it my overly long post from yesterday and asked
it to write a 500 word version.

This summary omitted, for example, the reference to the seminal
"Attention Is All You Need" paper, the "Transformer" algorithm and so on.

But this is perhaps an example of where you can use it as a tool at
work. Your co-workers are the ones after all the details, like
references to papers. The Pointy-Haired Boss, meanwhile, might have
requested a summary, so it can save you the time of having to
word-smith it down yourself.


Quote:
Originally Posted by ChatGPT
ChatGPT is a state-of-the-art language model developed by OpenAI that has the capability to generate human-like responses to a wide range of questions and prompts. While it is often referred to as a sophisticated pattern matcher, it is significantly different from traditional pattern matchers, such as convolutional neural networks (CNNs), in many crucial ways.

To start, it is important to note that CNNs, when processing images, produce identical outputs every time they are given the same inputs. However, if you were to present identical text to ChatGPT in multiple chat sessions, there is no guarantee that the output would be the same each time. This is because the ChatGPT model is not a simple pattern matcher but a more complex neural network that generates its responses based on probabilities and context.

ChatGPT operates differently from traditional pattern matchers by processing text one word at a time and using probabilities to predict the most likely next word. This approach is necessary because of the enormity of the natural language processing problem. If a pattern matcher were to deal with phrases directly, it would be faced with an impractically huge number of inputs to process, since there are an infinite number of possible phrases in any given conversation.

To keep the conversation on topic, the model needs to distill the conversation so far into a smaller, more manageable form of data. This information is stored in a hidden state vector, which summarizes the conversation up to that point and allows the model to keep track of the context and make informed predictions about the next message. The hidden state is updated at each time step based on the input and the previous hidden state, making the system a state machine.

This type of neural network is known as a Recurrent Neural Network (RNN), which has been around since the 1980s. However, the challenge in using RNNs for language models is deciding what information to retain in the hidden state vector and what to discard. The hidden state must retain only the information that is important for keeping the conversation on track, and it must do so within the constraints of limited memory and processing capability.

In recent years, advances in deep learning and hardware capabilities have made it possible to develop large language models, such as GPT-2 and GPT-3, and the latest version, ChatGPT. These models are trained on massive amounts of text data and have the ability to generate human-like responses to a wide range of questions and prompts.

One of the key differences between ChatGPT and traditional pattern matchers is the way it generates its responses. Instead of simply producing a predefined response based on a set of rules or patterns, ChatGPT generates its responses based on a combination of the input and its understanding of the context of the conversation. This allows the model to produce responses that are not only technically correct but also contextually appropriate and human-like.

In conclusion, while ChatGPT is often referred to as a sophisticated pattern matcher, it is significantly different from traditional pattern matchers in many crucial ways. Its approach to natural language processing, through the use of Recurrent Neural Networks and its ability to distill conversation context into a hidden state vector, sets it apart as a state-of-the-art language model that has the ability to generate human-like responses. With its capacity to process vast amounts of text data and generate responses that are contextually appropriate, ChatGPT represents a significant step forward in the field of language modeling and artificial intelligence.
#49 | oska (John) | 07-02-2023, 04:15 PM
https://www.zdnet.com/article/chatgp...-ai-scientist/

Probably just a blowhard though
#50 | gary | 07-02-2023, 09:49 PM
Google announces language model named Bard in response to ChatGPT

In a press release today, Google is introducing Bard, an experimental
conversational AI service using a large language model.

Google says Bard seeks to provide fresh, high-quality responses based on
information from the web.

Bard is powered by LaMDA, Google's next-generation language and
conversation technology, and is currently being opened up to trusted
testers before it becomes more widely available to the public in the
near future.

Google says it re-oriented the company around AI six years ago as it sees
AI as the most important way to deliver on its mission to make information
universally accessible and useful.

Press release here :-
https://blog.google/technology/ai/ba...earch-updates/
#51 | Hans Tucker (Hans) | 09-02-2023, 06:50 AM
Interesting. Granted, the source is a tabloid, so the facts are questionable, but the story raises an interesting issue.

https://www.dailymail.co.uk/news/art...ing-grade.html
#52 | Hemi | 09-02-2023, 09:48 AM
I've been away for a while with other work/life commitments (and terribly demotivating weather), but this thread has been a great (but very long) read.

I went to a surgical conference a few months ago entirely dedicated to AI in surgery. I was fascinated and amazed to see the sophisticated and novel uses of this tech in medicine already, in what is still a very immature technology/science.

The one thing that stuck with me, and it is very obvious in hindsight, is that virtually all of this still has the same biases (i.e. jumping to possibly inappropriate conclusions) as the society which created the tech.

The answers that come out of whatever application (or chatbot) will have the same gender, age, race and religious biases that are prevalent in society.

Interesting…

I’m with Adam though, I’m very excited by the whole thing. What an amazing time to be living in this simulation.
#53 | gary | 09-02-2023, 10:56 AM
School's out for summer

You can get it to write your school essays or there is also plan B.
[Attached image: permission_note.jpg]
#54 | multiweb (Marc) | 10-02-2023, 06:09 PM
GitHub Copilot

That looks interesting.

https://github.blog/2022-06-21-githu...ll-developers/
#55 | AdamJL | 10-02-2023, 06:24 PM
Certainly does, Marc! I wonder if it will make coding easier.
#56 | gary | 11-02-2023, 11:30 AM
Quote:
Originally Posted by multiweb View Post

Hi Marc,

AI-assisted code writing is absolutely going to become a
hundred-billion-dollar market in the years ahead.

However, GitHub's Copilot has been controversial ever since it was
commercialized.

Last year a copyright lawsuit was filed against them.

The claim is that Copilot was trained on code that was copyrighted
under various open source licenses. So, for example, if one of those
licenses had been the more restrictive GPL, the question in law
becomes: is the code output by Copilot tainted with the GPL? If you
then use that code as part of your company's closed-source software,
have you potentially tainted that as well?

Stories and opinion pieces here :-
https://www.theregister.com/2022/11/...pilot_opinion/

https://felixreda.eu/2021/07/github-...our-copyright/

https://www.theverge.com/2022/11/8/2...-training-data


Now, where it became really controversial was in December 2022, when
Tim Davis, a computer science professor at Texas A&M University, found
he could get Copilot to emit large chunks of his copyrighted code that
it had slurped up in the training set, but with all of the attribution
and license comments missing.

https://techcrunch.com/2022/12/08/gi...in-unresolved/

Quote:
Originally Posted by Tim Davis Twitter post
@github copilot, with "public code" blocked, emits large chunks of my copyrighted code, with no attribution, no LGPL license. For example, the simple prompt "sparse matrix transpose, cs_" produces my cs_transpose in CSparse. My code on left, github on right. Not OK.
Screenshot image below.

Now any professional software engineer called upon to give expert
testimony is going to put their hand on a copy of K&R and swear to tell the
truth and state that in their professional opinion, that's extremely likely to be
a lift of Davis's code with trivial changes.
[Attached image: code.jpg]
#57 | raymo | 11-02-2023, 12:59 PM
I wonder where the original thread went.
raymo
#58 | gary | 11-02-2023, 01:50 PM
Hi Raymo,

Quote:
Originally Posted by raymo View Post
I wonder where the original thread went.
raymo
Quote:
Originally Posted by glend (Glen)
Over the past few months, since the release of Open AI with CHAT GPT, I have watched countless examples of Python code generating art work, composing music, writing stories, and making the big leap into the provision of answers, outputs, and findings, which are significant jumps over what now seems like rudimentary software of now obsolete applications, including Google Et al.

The question I want to pose to the community here is:

Where is amateur astronomy going, in a new world order where AI is capable of finding anything in the sky, capturing it, rendering it, and displaying it, almost instantly compared to techniques which we might be using today?
I think it is important to instill in the minds of interested readers
exactly where the state of the art currently is, otherwise the
expectations they set for themselves may exceed the actual
capabilities of this current run of tools.

The old adage, "Right tool for the right job", rings true.

Even one of the biggest corporations spectacularly screwed this up
the other day, when Google's Twitter feed ran a demo of their response
to ChatGPT, a language model named Bard.

When asked, "What new discoveries from the James Webb Space
Telescope can I tell my 9 year old about?", Bard responded with
several bullet points, including the incorrect claim that it took the
very first images of exoplanets.

In that instant, the share price of Google's parent company,
Alphabet, plummeted, and USD 100 billion of the company's market value
was wiped out, just like that.

In a rush to show that Google was in the game too, the corporation no
doubt made that age-old mistake: they left it to marketing.

Now a better, more factual announcement would have been to explain what
Bard is. An interactive chatbot.

Then, using the best phrasing one could muster, put it to the market:
"Well, we all know how great Google is. And we all know that Google
itself doesn't write the content. There is some great stuff on the web.
Then there is the stuff that is wrong. Wading through that can be
quite a challenge. Though our new chatbot is not an oracle of
knowledge and it will say things that are incorrect, it can provide
you with another valuable tool to perform a search with."

With uncurated or only partially curated content, systems that are
factually correct every time remain a highly desirable but as yet
unmet goal. For the time being, the great achievement is that we have
created machines that are highly conversant in written language.
#59 | ReidG | 13-02-2023, 10:47 AM
Early days warning

AI like ChatGPT shows great potential, but it has some problems at this early stage.

MS Bing ventured into the future to report on the US Super Bowl before the event had taken place; it was not asked for a prediction. Read the article to see what really happened.



https://apnews.com/article/kansas-ci...abc6a5d9e6b23a
#60 | LewisM | 13-02-2023, 08:30 PM
There will always be those who persevere with the old, for personal satisfaction and just because.

To paraphrase John F. Kennedy : "We choose [these things]...not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too."

AI and so on serve immeasurable purposes, yet so do time honoured skills and techniques: the human touch that AI will never have, no matter how hard proponents claim that "AI made by humans = human"; in reality, ne'er the twain shall meet.

There will also always be show ponies and grandiose chest beaters demanding of public attention and the showiest bling one can imagine - and sometimes it is imagined - and there will be the meek and humble doing things for their own enjoyment and gratification. And these two distinct parties will also choose a methodology, and both shall be proponents thereof, and therein lies the joy of humanity - the right to choose, the right to self expression, and the right to love or loathe - to have opinion and critique. There is no right or wrong.

Am I a fan of the modern point-and-click insta-APOD or Facebook-Malin? No, but let's let others choose that path should they wish. The old ways will not die - we see that in age-old traditions revived, reviving and continuing through the master craftsmen.

Evolve and grow, expand, but always remember the past.