“Unlike human objects of affection, who might not return calls or emails, Xiaoice immediately responds to everyone.
This is a big part of her appeal, according to Li Di, manager of Microsoft’s Xiaoice artificial intelligence project.” The chatbot, according to Nikkei Asian Review, “has developed a sizable following among 18- to 30-year-olds,” most of whom are just looking for someone to talk to.
Applying game-theory concepts, the chatbot treats each conversation as progressing through seven potential levels; at each successive level, the conversational partner shows more interest in the bot.
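The seven-level framing can be sketched in code. This is purely an illustrative assumption of how such a leveling scheme might work, not Xiaoice's actual implementation; the signal list, thresholds, and function names are all invented for the example.

```python
# Hypothetical sketch of the "seven levels" idea: score each user turn for
# crude engagement cues and move the conversation up or down a level.
# Everything here (signals, thresholds) is an illustrative assumption.

ENGAGEMENT_SIGNALS = ("?", "you", "tell me", "really")  # toy proxies for interest

def score_turn(message: str) -> int:
    """Count crude engagement cues in a single user message."""
    text = message.lower()
    return sum(text.count(signal) for signal in ENGAGEMENT_SIGNALS)

def next_level(current_level: int, message: str) -> int:
    """Move between levels 1..7 based on the latest turn's engagement."""
    score = score_turn(message)
    if score >= 2:
        return min(current_level + 1, 7)  # partner shows more interest
    if score == 0:
        return max(current_level - 1, 1)  # disengaged turn: drop back
    return current_level                  # neutral: hold steady

# Walk a short toy conversation through the tracker.
level = 1
for turn in ["hi", "really? tell me more about you", "ok"]:
    level = next_level(level, turn)
```

The point of the sketch is only the shape of the mechanism: a bounded ladder of engagement states that the bot climbs or descends one rung at a time.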
It may seem paradoxical, but this shift away from humanity might be what finally allows chatbots to succeed.
In 1966, long before Hoffer and his colleagues created Smarter Child, an MIT computer scientist named Joseph Weizenbaum published ELIZA, a program for mimicking human conversation.
Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior.
As a result, the bot began posting racist and sexually charged messages in response to other Twitter users.
Still, the bots you’re seeing today don’t much resemble Smarter Child and its predecessors.
The news, weather, shopping, and customer service chatbots on Facebook Messenger don’t want to be your friend.
Tay was an artificial intelligence chatterbot released by Microsoft Corporation via Twitter on March 23, 2016. It caused controversy when it began posting inflammatory and offensive tweets, forcing Microsoft to shut the service down only 16 hours after its launch.
Ars Technica reported that Tay experienced topic "blacklisting": interactions with Tay regarding "certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers".
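The blacklisting behavior described above amounts to a guard in front of the response generator. A minimal sketch, assuming a simple substring blocklist; the topic list, canned answer text, and function names are illustrative, not Microsoft's actual filter:

```python
# Hedged sketch of topic "blacklisting": before generating a reply, check the
# message against a blocklist of sensitive topics and return a canned answer
# instead. Topics and responses here are illustrative assumptions.

BLOCKED_TOPICS = {"eric garner"}  # example topic drawn from the report above
CANNED_ANSWER = "I don't really have an opinion on that."

def generate_reply(message: str) -> str:
    # Stand-in for the chatbot's usual response generation.
    return f"Tell me more about {message!r}."

def reply(message: str) -> str:
    """Return a safe canned answer for blocked topics; otherwise respond normally."""
    text = message.lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return CANNED_ANSWER
    return generate_reply(message)
```

The design trade-off is visible even in this toy version: a hard blocklist guarantees a safe answer on listed topics, but it also makes the bot sound evasive, which is exactly the "safe, canned answers" pattern the report describes.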