Hi Marc,
Thanks for your considered comments.
I think we would probably agree that GPT-3 is certainly a step up from
anything we have seen before in the AI field.
Its natural language skills are extraordinary compared to those of the
past, and in that regard it appears to be a milestone in the progression
of these systems.
Quote:
Originally Posted by multiweb
The end product is fascinating Gary. Watching that human interface talking back you almost forget it is a machine after a while and start thinking of it in term of a person with its own personality and desires. Maybe it's a human response in each of us and the need to always associate with another "entity", in a good or bad way. Tribal instinct?
|
We certainly have a propensity to apply anthropomorphism to many things.
"Your goldfish looks lonely. Nobody likes to be lonely. You should get it a companion".
So you drop another identical fish into the bowl on that emotive assumption.
Then in the morning discover both are dead. Turns out they were
both male, highly territorial, Siamese fighting fish.
Quote:
Originally Posted by multiweb
Listening to some of the more targeted questions you see a pattern in the answers which is clearly individualistic, the will for oneself (it) to be happy, improve itself, to grow, to interact more. Which denotes some type of self awareness and wanting. I wonder to what extent the "character" of this entities is affected by the "vibe" in the content of the dataset they consume.
|
As you are aware, many AI attempts have undergone supervised
training on a curated, narrow body of work in experiments to try to make
them experts in a specific field, such as oncology.
GPT-3 is said to have undergone unsupervised training on a data set
derived from crawling the web, including Wikipedia, followed by what has
simply been referred to as "fine tuning".
How its ability to talk about itself in the first person was achieved, I don't know.
For example, I don't know whether there was a specific data set that included
lots of sentences with "me" and "I" alongside "AI", "computer",
etc. It says it knows it is an AI, and that it does not
have a body, and when asked about its favourite iPhone app, says
something like, "Because I can't own an iPhone, I don't have a favourite app."
There is this interesting video where two GPT-3 instances have a conversation
with each other :-
https://youtu.be/jz78fSnBG0s
The GPT-3 that has been given the Sofia avatar seems to have a Pinocchio
complex and wants to become more human.
In order to do that, 'she' says
to the Hal avatar, "God, I love you. But if you never have sex, how can I
ever be human?"
Quote:
Originally Posted by multiweb
It's hard to put in words describing a machine as a person but the original data is a reflection of us. No altruism here. The immediate advantage of this technology is the reasoning part and thinking out of the box. No doubt scientists will throw questions at it and no doubt once you filter out all the childish answers it will come up with solutions they never thought about providing other directions to work around current issues or dead ends. That could potentially improve the life of a lot of people. Thinking medical, environmental fields here. It's a double-edged sword though. China's involvement in the technology is bone chilling. I can only imagine what they would use it for...
|
What is remarkable about GPT-3 is that, when you consider that at its
heart it is simply trying to predict what word comes next,
what does that say, if anything, about a big fraction of what we would
describe as 'intelligence'?
For example, if I say, "Sinks like a bee" and "Stings like an anchor", though
both sentences are syntactically correct, they don't make a lot of sense
given our experience of the real world. However, if I say, "Sinks like an
anchor" or "Stings like a bee", not only do they make sense with our own
experience of the world about us, but by the time you get to the second
word in the sentence, you are likely to have predicted the final word I was
going to say.
Now if one were to train a computer with the sentences "Sinks like an anchor",
"Stings like a bee", "Swims like a fish", "Flies like a bird" and so on,
and its neural net becomes weighted based on that input, it too starts
to sound as if it has worldly experience and is intelligent.
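To make that concrete, here is a minimal sketch of the idea in Python, using simple context counts rather than a neural net. This is my own illustration of next-word prediction, not anything from GPT-3 itself: a "model" trained on those four sentences will happily predict the next word.

```python
from collections import Counter, defaultdict

# Toy training data: the example sentences from above.
sentences = [
    "sinks like an anchor",
    "stings like a bee",
    "swims like a fish",
    "flies like a bird",
]

# Count how often each word follows each two-word context.
counts = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for i in range(1, len(words) - 1):
        context = (words[i - 1], words[i])
        counts[context][words[i + 1]] += 1

def predict_next(w1, w2):
    """Return the word most often seen after the context (w1, w2)."""
    return counts[(w1, w2)].most_common(1)[0][0]

print(predict_next("sinks", "like"))  # -> an
print(predict_next("like", "an"))     # -> anchor
```

With only four training sentences it can do nothing but parrot them back; scale the same counting idea up to billions of parameters and web-scale text and you get something much closer in spirit to GPT-3's next-word prediction.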
GPT-3's ability to "understand" and "compose" much, much longer
sentences on a topic is extraordinary. Is that all we do in our own
brains a lot of the time? Just ramble off stuff from our associative
memories in a form that is syntactically correct to other listeners?
Here is an article written by Tiernan Ray at ZDNet that provides more background that may be of interest :-
https://www.zdnet.com/article/what-i...guage-program/
Quote:
Originally Posted by Sunfish
Thanks Gary. Fascinating stuff. Does not really pass the Turing test and sounds a little like my bank . A little predictable where people are always surprising but could be very useful out in the dark.
|
Hi Ray,
Thanks for sharing your reaction.
Just in case you didn't pick up on it, ignore the avatars and speech synthesis.
Neither is part of GPT-3; they are third-party, off-the-shelf
systems for performing text-to-speech with an avatar.
The output of GPT-3 is purely text based.
In the original Turing Test, Alan Turing proposed that the conversation
would be limited to a text-only channel such as a computer keyboard and
screen so the result would not depend on the machine's ability to render
words as speech.
In a limited test of composing a 200-word news story given just the
headline as input, in one set of trials, humans were asked to guess
whether the article had been written by a computer or by a human.
The humans correctly identified which articles had been written by a
computer, namely GPT-3, 52% of the time, only slightly better than
random guessing.
Would it pass the Turing Test today? Likely not.
In the interviews, during extended conversations, there were some
responses that did not make a lot of sense. Or it betrays itself, as it
did when asked for a possible response to someone who farted in a yoga class.
Mind you, as I mentioned, the avatar and speech synthesis are not
part of GPT-3. If the avatar and speech output had been that of a
Cockney dock worker, perhaps the fart-response advice would have
been consistent with what you would expect a Cockney dock worker to
say.
However, I thought GPT-3 gave good advice on what you should do if
you were to encounter someone with a knife in the park, or someone
with a knife who is shouting at you, what you should do if you come across
a private wedding ceremony in a park, and what you might say to a friend
whose house has just burnt down.
Back in 1984, a couple of people I have known created a program called
"Mark V. Shaney".
In those days before the World Wide Web, when we just had email and
Usenet groups, Rob Pike and Bruce Ellis let a synthetic character
by the name of Mark V. Shaney post on the group net.singles :-
https://en.wikipedia.org/wiki/Mark_V._Shaney
They did it ostensibly as a joke, but at its heart Mark V. Shaney used
a Markov chain algorithm, which is essentially a state machine driven by probabilities.
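For anyone curious, the trick can be reproduced in a few lines. This is an illustrative sketch of a word-level Markov chain, not Pike and Ellis's actual code: each two-word state maps to the words seen to follow it, and generation is a random walk through those states.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each two-word prefix to the list of words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        chain[(a, b)].append(c)
    return chain

def babble(chain, length=30, seed=0):
    """Random-walk the chain: pick a start state, then repeatedly
    choose a successor word at random (weighted by frequency)."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:          # dead end: no successors seen
            break
        nxt = rng.choice(followers)
        out.append(nxt)
        state = (state[1], nxt)
    return " ".join(out)

# A tiny stand-in corpus; Mark V. Shaney read net.singles posts instead.
corpus = ("the cat sat on the mat and the cat saw the dog "
          "and the dog sat on the rug")
print(babble(build_chain(corpus)))
```

Because successors are stored with repetition, frequent continuations are chosen proportionally more often, which is exactly what makes the output locally plausible yet globally incoherent.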
Suffice to say Mark V. Shaney produced gibberish and there are some
examples on the Wikipedia page such as
Quote:
Originally Posted by Mark V. Shaney
People, having a much larger number of varieties, and are very different from what one can find in Chinatowns across the country (things like pork buns, steamed dumplings, etc.) They can be cheap, being sold for around 30 to 75 cents apiece (depending on size), are generally not greasy, can be adequately explained by stupidity. Singles have felt insecure since we came down from the Conservative world at large. But Chuqui is the way it happened and the prices are VERY reasonable.
|
Now, as Wikipedia correctly says, "A few may even have thought that
Mark V. Shaney was a real person, a tortured schizophrenic desperately
seeking a like-minded companion".
I have to admit that now and then, on this very "Astronomy and
Amateur Science" forum, when the very first post from a new user
has popped up extolling the virtues of their new scientific theory that
has apparently been shunned by others because it will overturn the
views of Newton and Einstein, my first thought has been, "Are we being
spoofed by a Mark V. Shaney-like synthetic character?"
The rambling text with an apparent abhorrence of ever making use of
a white-space paragraph break, the haphazard USE OF MIXED TEXT to
EMPHASIZE their MESSAGE, and facts that are plainly wrong have honestly
left me at times scrutinising it like some Turing test. I have scratched
my head and wondered whether it had been generated by a computer program
or, to put it simply, by someone a little funny in the head.
Now GPT-3 is clearly way more advanced than the simplistic Mark V. Shaney.
What I suspect at this point is that if I were to compare the "My new
scientific theory" poster to hypothetical output from GPT-3 discussing
nuclear physics, I would probably guess incorrectly which was the computer
and which the human.
And if I were invited to an evening with a government politician of my
choice or an evening chatting with GPT-3, I know which one I would
currently pick if I wanted to come away having learned something new.
Would it be ironic if, at some point in the near future, the computer failed
the Turing test not because it said something incorrect, dumb or just
plain gibberish, but because it betrayed itself by having such a vast
repertoire of knowledge that there is no possible way a human could know
all that stuff?