#11
27-06-2021, 02:15 PM
gary
Quote:
Originally Posted by xelasnave
Hi Gary
I find this so interesting.
I see the only problem as somewhat similar to someone learning a language:
little heed is taken to include appreciation of, among probably other similar
things, context and slang. I expect the same approach will work, i.e. for
maths, but exposure to context and slang really would not be easy. It's like
taking a person who has been trained well in English at "night school" to the
pub, or similar, where abuse of the language, slang and context would leave
them not understanding anything, I expect, and such a person needs someone at
their side to explain the slang or corrupted words. Anyway, just a thought:
learning all of Wiki won't necessarily equip you for a trip to the pub.
Thanks for posting.
Alex
Hi Alex,

Sounds like you have been in the wars lately; I hope you have been feeling better.

Glad to hear you found it interesting too.

A trip to the pub is one thing, but it was taken up a level in a past
experiment that went wrong.

Perhaps you might recollect the furore over a 2016 Microsoft chatbot
offering called "Tay". It caused controversy when the bot began to post
inflammatory and offensive tweets through its Twitter account, causing
Microsoft to shut down the service only 16 hours after its launch.

Users were encouraged to chat with Tay but they quickly turned it into
a nasty, swearing, racist, self-confessed Hitler-loving, abusive Twitter
poster. Tay would "learn" from the people who interacted with it, and its
"education" was about as effective as leaving a 5-year-old to be babysat
by a bunch of foul-mouthed neo-Nazi skinheads.

See :-
https://spectrum.ieee.org/tech-talk/...e-conversation

It was a cautionary tale for AI developers about training data and the
risks of letting a model learn, unsupervised, from live user input.

What does one do? Like young children, does the AI have to be trained
to avoid nasty input until it has been informed and is "mature" enough to
recognise words and expressions that may offend?

There is that funny moment (at least for me) in one of the videos when GPT-3
is invited to create a limerick and "she" drops the f-bomb. But that type of
faux pas must leave developers on guard, wondering what else she may have
picked up from the training data set.

Apparently, when it came to testing GPT-3, the data sets scraped from the
web were so large that the developers, in their paper, described the effort
they went to in trying to ensure the test questions - which were meant
to be largely unseen - never appeared within the data set used for
training. They wanted to ask questions that required GPT-3 to extrapolate
knowledge rather than parrot what was already there.
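
As best I understand it, that sort of check boils down to scanning for word
sequences (n-grams) that the test questions share with the training text, and
flagging any test item that overlaps. Here is a minimal sketch of the idea in
Python; the whitespace tokenisation, the 5-gram window and the toy corpus are
my own illustrative assumptions, not the actual GPT-3 pipeline:

Code:
# Sketch of an n-gram overlap "contamination" check, in the spirit of
# what the GPT-3 paper describes. N, the tokenisation and the data are
# illustrative assumptions, not the real pipeline.

N = 5  # length of the word sequences to compare (illustrative choice)

def ngrams(text, n=N):
    """Return the set of lowercase n-word sequences in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(test_item, training_ngrams, n=N):
    """Flag a test item if any of its n-grams occurs in the training data."""
    return not ngrams(test_item, n).isdisjoint(training_ngrams)

# Toy example: index the training corpus once...
training_text = "somewhere in the corpus the quick brown fox jumps over the lazy dog again"
index = ngrams(training_text)

# ...then screen each candidate test question against it.
for question in ["tell me: the quick brown fox jumps over what animal?"]:
    if is_contaminated(question, index):
        print("overlap found - drop or flag this test item:", question)

On a real web-scale corpus you would hash the n-grams rather than hold them
all in a set, but the screening logic is the same.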

It's such a big, multi-faceted area with so many interesting challenges.
