Go Back   IceInSpace > General Astronomy > Astronomy and Amateur Science

  #1  
Old 24-06-2021, 08:25 PM
gary
Registered User

 
Join Date: Apr 2005
Location: Mt. Kuring-Gai
Posts: 5,914
GPT-3 - Intelligent conversations with an AI

GPT-3 is a third generation neural network language model created by the OpenAI research laboratory in San Francisco.

Neural networks can be implemented in either software or hardware, and each neuron has inputs and an output that can be connected to other neurons. Each neuron computes a weighted sum of its inputs, which determines whether it 'activates' - that is, produces an output - or not.
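As a sketch of the idea (illustrative weights and threshold only, not any real network's values), a single neuron can be modelled in a few lines of Python:

```python
# A single artificial neuron: a weighted sum of inputs, compared
# against a threshold to decide whether the neuron "activates".
def neuron(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Two inputs with made-up weights.
print(neuron([1.0, 0.5], [0.6, 0.4], 0.7))  # fires: 0.6 + 0.2 = 0.8 > 0.7
print(neuron([1.0, 0.5], [0.2, 0.4], 0.7))  # stays off: 0.2 + 0.2 = 0.4
```

Real networks use smooth activation functions rather than a hard threshold, but the weighted-sum idea is the same.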

"Backpropagation" refers to a way of training neural networks - that is, getting each neuron to adjust the weights of its inputs so that the network's overall error shrinks.

Think of it like back in school when you had to find the minima or local minima of a function in two dimensions of x and y. Then translate that concept in your mind to finding the minima in some 3D space, x, y, z, such as a valley in a hilly or mountainous landscape. Then make the conceptual leap to some much higher dimensional space, with peaks and valleys in that space still having minima. That is akin to what goes on in the algorithms of a neural network, where the inputs are x, y, z, a, b, c, d and so on. Just as you used differentiation to find gradients that lead you toward minima, at their heart neural networks are doing the same high school maths. There is nothing organic, wet or brain-like squidgy about them.
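To make the valley analogy concrete, here is a minimal sketch (my own illustration, not anything from OpenAI) of gradient descent finding the lowest point of a simple two-variable "bowl":

```python
# Gradient descent on f(x, y) = (x - 3)^2 + (y + 1)^2, whose minimum
# is at (3, -1). Training a neural network does the same walk downhill,
# just in a space with billions of dimensions instead of two.

def grad(x, y):
    # Partial derivatives of f with respect to x and y.
    return 2 * (x - 3), 2 * (y + 1)

x, y = 0.0, 0.0          # arbitrary starting point on the landscape
learning_rate = 0.1
for _ in range(100):     # repeatedly take a small step downhill
    dx, dy = grad(x, y)
    x -= learning_rate * dx
    y -= learning_rate * dy

print(round(x, 3), round(y, 3))  # converges toward (3, -1)
```

The "learning rate" controls how big each downhill step is; too large and you overshoot the valley, too small and training crawls.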

Machine learning as a field has been around for decades. Funding and research in it has waxed and waned over the years.

The approach of using neural networks for machine learning had been touted by some computer scientists, but year after year the reality of the utility of neural networks would fall far short of the hopes of the designers. There was a period that computer scientists refer to as the "AI Winter", from the mid 1980s to the mid 2000s, when funding and interest largely dried up. To declare to other computer scientists and engineers during that period that one was involved in AI research was to be judged as having a career
with no future prospects.

However, in the past decade, neural networks have started to show increasing promise in various areas. Some researchers say that the problem before was partly that the statistical methods they rely on to find those local minima seemed to need a diet of large datasets.
There simply wasn't enough data before.

Enter the Internet.

Now that programs such as GPT-3 have been scaled up to hundreds of billions of parameters (GPT-3 has 175 billion) and fed rich diets such as the entire contents of Wikipedia, they have started to become interesting, and I dare say would beat the pants off most of us when it comes to general knowledge.

GPT-3 was introduced in May 2020 and since then several videos have appeared on the Internet demoing it.

This first video is in some ways the least interesting, simply because we see more of the human interviewer than the GPT-3 AI he is interviewing.
Nevertheless, it is worth watching first as Eric Elliott provides some background on GPT-3 and context on how the video was produced including the use of the post-processed avatar :-
https://youtu.be/PqbB07n_uQ4

However, this playlist of 11 short videos of conversations between Dr. Alan D. Thompson and a GPT-3 AI named 'Leta' provides a continual one-on-one interaction between human and computer.

There are some cool moments, like when Thompson asks Leta, "If the sky were the sea, what would it make the birds?" and she responds, "Flying fish".

Enjoy!

https://www.youtube.com/watch?v=5DBX...-U0zxCgbHrGa4V

Last edited by gary; 24-06-2021 at 10:23 PM.
  #2  
Old 24-06-2021, 10:39 PM
gary
Registered User

 
Join Date: Apr 2005
Location: Mt. Kuring-Gai
Posts: 5,914
The Chinese govt-backed Beijing Academy of Artificial Intelligence has also just introduced Wu Dao 2.0, said to have 1.75 trillion parameters and claimed to be ten times larger than GPT-3 :-

https://towardsdatascience.com/gpt-3...s-832cd83db484

https://www.engadget.com/chinas-giga...211414388.html
  #3  
Old 25-06-2021, 08:48 AM
multiweb (Marc)
ze frogginator

 
Join Date: Oct 2007
Location: Sydney
Posts: 22,060
The end product is fascinating Gary. Watching that human interface talking back, you almost forget it is a machine after a while and start thinking of it in terms of a person with its own personality and desires. Maybe it's a human response in each of us, the need to always associate with another "entity", in a good or bad way. Tribal instinct?

Listening to some of the more targeted questions, you see a pattern in the answers which is clearly individualistic: the will for oneself (it) to be happy, to improve itself, to grow, to interact more. Which denotes some type of self awareness and wanting. I wonder to what extent the "character" of these entities is affected by the "vibe" in the content of the dataset they consume.

It's hard to put into words describing a machine as a person, but the original data is a reflection of us. No altruism here. The immediate advantage of this technology is the reasoning part and thinking out of the box. No doubt scientists will throw questions at it, and no doubt once you filter out all the childish answers it will come up with solutions they never thought about, providing other directions to work around current issues or dead ends. That could potentially improve the lives of a lot of people. Thinking medical and environmental fields here.

It's a double-edged sword though. China's involvement in the technology is bone chilling. I can only imagine what they would use it for...
  #4  
Old 25-06-2021, 09:24 PM
Sunfish (Ray)
Registered User

 
Join Date: Mar 2018
Location: Wollongong
Posts: 1,909
Thanks Gary. Fascinating stuff. It does not really pass the Turing test and sounds a little like my bank. A little predictable, where people are always surprising, but it could be very useful out in the dark.
  #5  
Old 26-06-2021, 01:29 PM
gary
Registered User

 
Join Date: Apr 2005
Location: Mt. Kuring-Gai
Posts: 5,914
Hi Marc,

Thanks for your considered comments.

I think we would probably agree, GPT-3 is certainly a step up from
anything we have seen before in the AI field.

Its natural language skills are extraordinary compared to those of the
past and in that regard it appears to be a milestone in the progression
of these systems.

Quote:
Originally Posted by multiweb View Post
The end product is fascinating Gary. Watching that human interface talking back, you almost forget it is a machine after a while and start thinking of it in terms of a person with its own personality and desires. Maybe it's a human response in each of us, the need to always associate with another "entity", in a good or bad way. Tribal instinct?
We certainly have a propensity to apply anthropomorphism to many things.

"Your goldfish looks lonely. Nobody likes to be lonely. You should get it a companion".

So you drop another identical fish into the bowl on that emotive assumption.

Then in the morning discover both are dead. Turns out they were
both male, highly territorial, Siamese fighting fish.

Quote:
Originally Posted by multiweb
Listening to some of the more targeted questions, you see a pattern in the answers which is clearly individualistic: the will for oneself (it) to be happy, to improve itself, to grow, to interact more. Which denotes some type of self awareness and wanting. I wonder to what extent the "character" of these entities is affected by the "vibe" in the content of the dataset they consume.
As you are aware, many AI attempts have undergone supervised
training on a curated, narrow body of work in experiments to try and make
them experts in a specific field, such as oncology.

GPT-3 is said to have undergone unsupervised training on a data set
derived from crawling the web including Wikipedia followed by what has
simply been referred to as "fine tuning".

How its ability to talk about itself in the first-person was achieved, I don't know.
For example, whether there was a specific data set that included
lots of sentences with "me" and "I" that also included "AI", "computer",
etc., I don't know. It says it knows it is an AI, that it knows it does not
have a body, and when asked about its favourite iPhone app, it says
something like "because I can't own an iPhone, I don't have a favourite app".

There is this interesting video where two GPT-3 instances have a
conversation with each other :-
https://youtu.be/jz78fSnBG0s

The GPT-3 that has been given the Sofia avatar seems to have a Pinocchio
complex and wants to become more human. In order to do that, 'she' says
to the Hal avatar, "God, I love you. But if you never have sex, how can I
ever be human?"

Quote:
Originally Posted by multiweb
It's hard to put into words describing a machine as a person, but the original data is a reflection of us. No altruism here. The immediate advantage of this technology is the reasoning part and thinking out of the box. No doubt scientists will throw questions at it, and no doubt once you filter out all the childish answers it will come up with solutions they never thought about, providing other directions to work around current issues or dead ends. That could potentially improve the lives of a lot of people. Thinking medical and environmental fields here. It's a double-edged sword though. China's involvement in the technology is bone chilling. I can only imagine what they would use it for...
What is remarkable about GPT-3 is that when you consider that at its
heart, it is simply trying to make a prediction of what word comes next,
what does that say, if anything, about a big fraction of what we would
describe as 'intelligence'?

For example, if I say, "Sinks like a bee" or "Stings like an anchor", though
both sentences are syntactically correct, they don't make a lot of sense
given our experience of the real world. However, if I say, "Sinks like an
anchor" or "Stings like a bee", not only do they make sense with our own
experience of the world about us, but by the time you get to the second
word in the sentence, you are likely to predict the final word I was
going to say.

Now if one were to train a computer with the sentences, "Sinks like an anchor",
"Stings like a bee", "Swims like a fish", "Flies like a bird" and so on,
and its neural net becomes weighted based on that input, it too starts
to sound as if it has worldly experience and is intelligent.
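A toy sketch of that idea (my own illustration; GPT-3's mechanism is vastly more sophisticated, but the task - predict the next word - is the same): count which word follows each context in the training sentences, then predict the most common successor.

```python
# A tiny next-word predictor "trained" on the sentences above.
from collections import Counter, defaultdict

sentences = ["sinks like an anchor", "stings like a bee",
             "swims like a fish", "flies like a bird"]

# For every context (all the words seen so far), count what came next.
counts = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for i in range(len(words) - 1):
        context = tuple(words[:i + 1])
        counts[context][words[i + 1]] += 1

def predict(prefix):
    """Most likely next word given the words seen so far."""
    return counts[tuple(prefix.split())].most_common(1)[0][0]

print(predict("stings like"))    # -> 'a'
print(predict("stings like a"))  # -> 'bee'
```

With only four training sentences this parrots them exactly; with billions of sentences, the same "predict the next word" objective starts to look like worldly experience.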

GPT-3's ability to "understand" and "compose" much, much longer
sentences on a topic is extraordinary. Is that all we do in our own
brains a lot of the time? Just ramble off stuff from our associative
memories in a form that is syntactically correct to other listeners?

Here is an article written by Tiernan Ray at ZDNet that provides more background that may be of interest :-
https://www.zdnet.com/article/what-i...guage-program/

Quote:
Originally Posted by Sunfish
Thanks Gary. Fascinating stuff. Does not really pass the Turing test and sounds a little like my bank . A little predictable where people are always surprising but could be very useful out in the dark.
Hi Ray,

Thanks for sharing your reaction.

Just in case you didn't pick up on it, ignore the avatars and speech synthesis.
Neither is part of GPT-3; they are third-party, off-the-shelf
systems for performing text-to-speech with an avatar.

The output of GPT-3 is purely text based.

In the original Turing Test, Alan Turing proposed that the conversation
would be limited to a text-only channel such as a computer keyboard and
screen so the result would not depend on the machine's ability to render
words as speech.

In a limited test of composing a 200-word news story given just the
headline as input, in one set of trials, humans were asked to guess
whether the article had been written by a computer or by a human.
The humans correctly identified which articles had been written by a
computer, namely GPT-3, 52% of the time. Only slightly better than
randomly guessing.

Would it pass the Turing Test today? Likely not. In the interviews, in
extended conversations, there were some responses that did not make
a lot of sense. Or it betrays itself, as it did when asked for
a possible response to someone who farted in yoga class.
Mind you, as I mentioned, the avatar and speech synthesis is not
part of GPT-3. If the avatar and speech output had been that of a
cockney dock worker, perhaps the fart response advice would have
been consistent with what you would expect a cockney dock worker to
say.

However, I thought GPT-3 gave good advice on what you should do if
you were to encounter someone with a knife in the park, or someone
with a knife who is shouting at you, what you should do if you encounter
a private wedding ceremony in a park, and what you might say to a friend
whose house just burnt down.

Back in 1984, a couple of people I have known created a program called
"Mark V. Shaney".

Back in the day, before the world wide web, when we just had email and
Usenet groups, Rob Pike and Bruce Ellis let a synthetic character
by the name of Mark V. Shaney post on the group net.singles :-
https://en.wikipedia.org/wiki/Mark_V._Shaney

They did it ostensibly as a joke, but at its heart Mark V. Shaney used
a Markov chain algorithm, which is a state machine driven by probabilities.
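For anyone curious, a minimal word-level Markov chain in the spirit of Mark V. Shaney (an illustrative sketch, not the original code) looks like this: record which words follow each word, then walk the chain picking successors at random.

```python
# A word-level Markov chain text generator, Mark V. Shaney style.
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length=8, seed=42):
    """Walk the chain from 'start', choosing random successors."""
    rng = random.Random(seed)  # seeded only so runs are repeatable
    out = [start]
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
print(babble(chain, "the"))
```

Every local pair of words is plausible, but there is no memory beyond the previous word, which is why the output drifts into gibberish exactly as the Wikipedia samples show.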

Suffice to say, Mark V. Shaney produced gibberish; there are some
examples on the Wikipedia page, such as:

Quote:
Originally Posted by Mark V. Shaney
People, having a much larger number of varieties, and are very different from what one can find in Chinatowns across the country (things like pork buns, steamed dumplings, etc.) They can be cheap, being sold for around 30 to 75 cents apiece (depending on size), are generally not greasy, can be adequately explained by stupidity. Singles have felt insecure since we came down from the Conservative world at large. But Chuqui is the way it happened and the prices are VERY reasonable.
Now, as Wikipedia correctly says, "A few may even have thought that
Mark V. Shaney was a real person, a tortured schizophrenic desperately
seeking a like-minded companion".

I have to admit that now and then, on this very "Astronomy and
Amateur Science" forum, when the very first post from a new user
has popped up extolling the virtues of their new scientific theory that
has apparently been shunned by others because it will overturn the
views of Newton and Einstein, my first thought has been, "are we being
spoofed by a Mark V. Shaney-like synthetic character?"

The rambling text with an apparent abhorrence of ever making use of
a white-space paragraph break, the haphazard USE OF MIXED TEXT, to
EMPHASIZE their MESSAGE, and facts that are plain wrong have honestly
left me at times scrutinising it like some Turing test. I have scratched
my head and wondered if it had been generated by a computer program
or, to put it simply, someone a little funny in the head.

Now GPT-3 is clearly way more advanced than the simplistic Mark V. Shaney.

What I suspect at this point is that if I were to compare the "My new
scientific theory" poster to hypothetical output from GPT-3 discussing
nuclear physics, I would probably incorrectly guess which was the computer
and which was the human.

And if I were invited to an evening with a government politician of my
choice or an evening chatting with GPT-3, I know which one I would
currently pick if I wanted to come away having learnt something new.

Would it be ironic if at some point in the near future, the computer fails
the Turing test, not because it says something incorrect, dumb or just
plain gibberish, but because it betrays itself by having such a vast
repertoire of knowledge, that there is no possible way a human could know
all that stuff?
  #6  
Old 26-06-2021, 05:33 PM
Sunfish (Ray)
Registered User

 
Join Date: Mar 2018
Location: Wollongong
Posts: 1,909
Thanks Gary for the reply. I had forgotten the constraints on the Turing test. A while since I read the biographies. Turing may well have appreciated a text conversation with this AI, convinced or not.

I did enjoy the answer to the trillion-dollar US economic stimulus spending question. Education was the priority in the list, as it is the base of all other fields. Worked for Germany.

I think a more important factor in AI development than its apparent human qualities is its usefulness in the field and in training, or multi-tasking when one's hands are full and screen interaction is impossible. It could save lives and cover technical deficiencies. A current legal, planning and infrastructure AI would be a boon, to name one small field.
  #7  
Old 26-06-2021, 05:38 PM
Sunfish (Ray)
Registered User

 
Join Date: Mar 2018
Location: Wollongong
Posts: 1,909
Ha. I like it. Better than annoyance at the leading questions.

Quote:
Originally Posted by gary
I have to admit that now and then, on this very "Astronomy and
Amateur Science" forum, when the very first post from a new user
has popped extolling the virtues of their new scientific theory that
has apparently been shunned by others because it will overturn the
views of Newton and Einstein, my first thought has been, "are we being
spoofed by a Mark V. Shaney-like synthetic character?"
  #8  
Old 26-06-2021, 05:41 PM
xelasnave
Gravity does not Suck

 
Join Date: Mar 2005
Location: Tabulam
Posts: 16,866
Hi Gary
I find this so interesting.
I see the only problem as somewhat similar to someone learning a language... little heed is taken to include appreciation of, among probably other similar things, context and slang... I expect the same approach will work, i.e. maths, but exposure to context and slang really would not be easy... like you could take a person who has been trained well in English at "night school" to the pub, or similar, where abuse of the language, slang and context would leave them not understanding anything, I expect, and such a person needs someone at their side to explain the slang or corrupted words... anyways, just a thought... learning all of Wiki won't necessarily equip you for a trip to the pub ...
Thanks for posting.
Alex
  #9  
Old 26-06-2021, 09:20 PM
multiweb (Marc)
ze frogginator

 
Join Date: Oct 2007
Location: Sydney
Posts: 22,060
Found that blog pretty cool in one of the links you've provided: https://minimaxir.com/2020/07/gpt3-expectations/
  #10  
Old 27-06-2021, 11:53 AM
Sunfish (Ray)
Registered User

 
Join Date: Mar 2018
Location: Wollongong
Posts: 1,909
Hmm. Very interesting insights into how GPT-3 works.

Makes me wonder, however, what the goal of this kind of random text generation is, other than an open-ended language experiment. Fun though it is.

There is no subtle machine logic here. Perhaps the field of language training or amusing games is where this kind of work will stay for now. Shows how easy it is to confuse words with understanding.

Quote:
Originally Posted by multiweb View Post
Found that blog pretty cool in one of the links you've provided: https://minimaxir.com/2020/07/gpt3-expectations/
  #11  
Old 27-06-2021, 01:15 PM
gary
Registered User

 
Join Date: Apr 2005
Location: Mt. Kuring-Gai
Posts: 5,914
Quote:
Originally Posted by xelasnave View Post
Hi Gary
I find this so interesting.
I see the only problem as somewhat similar to someone learning a language ... Little heed is taken to include appreciation of, among probably other similar things, context and slang...I expect the same approach will work ie math but exposure to context and slang really would not be easy...like you could take a person who has been trained well in English at "night school" to the pub, or similar where abuse of the language, slang and context would leave them not understanding anything I expect and such a person needs someone at their side to explain the slang or corrupted words...anyways just a thought...learning all Wiki wont necessarily equip you for a trip to the pub ...
Thanks for posting.
Alex
Hi Alex,

Sounds like you have been in the wars lately, and I hope you have been feeling better.

Glad to hear you found it interesting too.

A trip to the pub is one thing, but it has been taken up a level in the past in
an experiment that went wrong.

Perhaps you might recollect the furore over a 2016 Microsoft chatbot
offering called "Tay". It caused controversy when the bot began to post
inflammatory and offensive tweets through its Twitter account, causing
Microsoft to shut down the service only 16 hours after its launch.

Users were encouraged to chat with Tay, but they quickly turned it into
a nasty, swearing, racist, self-confessed Hitler-loving, abusive Twitter
poster. Tay would "learn" from the people who interacted with it, and its
"education" was about as effective as leaving a 5-year-old to be
babysat by a bunch of foul-mouthed neo-Nazi skinheads.

See :-
https://spectrum.ieee.org/tech-talk/...e-conversation

It was a cautionary tale for AI developers when it came to the training
data and the risks of unsupervised training.

What does one do? Like young children, does the AI have to be trained
to avoid nasty input until it has been informed and is "mature" enough to
recognise words and expressions that may offend?

There is that funny moment (at least for me) in one of the videos where, when GPT-3
is invited to create a limerick, "she" drops the f-bomb. But that type of
faux pas must leave developers on guard, wondering what else she
may have picked up from the training data set.

Apparently, when it came to testing GPT-3, the data sets on the web
were so large that the developers in their paper talked about how they
would go to some effort to try and ensure the tests - which were meant
to be largely unsolicited questions - never appeared within the data set
that was used in training. They wanted to be able to ask questions
that required GPT-3 to extrapolate knowledge rather than parrot
what was already there.

It's such a big, multi-faceted area with so many interesting challenges.

Last edited by gary; 27-06-2021 at 01:26 PM.
  #12  
Old 27-06-2021, 04:20 PM
xelasnave
Gravity does not Suck

 
Join Date: Mar 2005
Location: Tabulam
Posts: 16,866
Even more interesting.
Thanks Gary.
Alex