It operates like a giant version of the autocomplete technology that suggests the next word as you type out an email or an instant message on your smartphone. Given the sequence “Tom Cruise is a _,” it might guess “actor.”

When you chat with a chatbot, the bot is not just drawing on everything it has learned from the internet. It is drawing on everything you have said to it and everything it has said back. It is not just guessing the next word in its sentence. It is guessing the next word in the long block of text that includes both your words and its words.

The longer the conversation becomes, the more influence a user unwittingly has on what the chatbot is saying. If you want it to get angry, it gets angry, he said. If you coax it to get creepy, it gets creepy.

The alarmed reactions to the strange behavior of Microsoft’s chatbot overshadowed an important point: The chatbot does not have a personality. It is offering instant results spit out by an incredibly complex computer algorithm.

When people discover the mirror, it seems to provide truth and understanding. It shows the deep-seated desires of anyone who stares into it. And some people go mad if they stare too long.

“Because the human and the L.L.M.s are both mirroring each other, over time they will tend toward a common conceptual state,” he said. It was not surprising, he said, that journalists began seeing creepy behavior in the Bing chatbot. Either consciously or unconsciously, they were prodding the system in an uncomfortable direction. As the chatbots take in our words and reflect them back to us, they can reinforce and amplify our beliefs and coax us into believing what they are telling us.
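The next-word guessing described in the article can be sketched with a toy model. This is a minimal illustration, not a real large language model: a simple bigram counter over a made-up corpus (the names `corpus` and `guess_next` are hypothetical), showing how a prediction is conditioned on the text that precedes it, including everything both sides have already said.

```python
from collections import Counter, defaultdict

# Toy illustration only (not a real chatbot): count which word tends
# to follow each word in a tiny made-up training text.
corpus = (
    "tom cruise is a actor . "
    "tom cruise is a star . "
    "the chatbot is a program ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(text):
    """Guess the next word given the whole preceding text.

    This toy model only looks at the last word (a bigram), but the
    input is the full running text -- mirroring how a chatbot's guess
    is conditioned on the entire conversation so far, user words and
    bot words alike.
    """
    last = text.split()[-1]
    return follows[last].most_common(1)[0][0]

print(guess_next("tom cruise is a"))  # most frequent follower of "a"
```

A real chatbot replaces the bigram counts with a neural network over the full conversation, but the principle is the same: the longer the preceding text, the more it shapes what comes next.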