When Microsoft's nascent Bing chatbot turns testy or even threatening, it's likely because it essentially mimics what it learned from online conversations, analysts and academics said.

Tales of disturbing exchanges with the artificial intelligence chatbot, including threats and professed desires to steal nuclear codes, create a deadly virus, or be alive, have gone viral this week.

"I think this is basically mimicking conversations that it's seen online," Graham Neubig, an associate professor at Carnegie Mellon University's language technologies institute, said Friday.

A chatbot, by design, serves up the words it predicts are the most likely responses, without understanding meaning or context. Humans who banter with such programs, however, naturally tend to read emotion and intent into what a chatbot says.

"Large language models have no concept…
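For readers curious what that prediction step looks like in practice, here is a minimal sketch. It assumes the open-source Hugging Face transformers library and the small "gpt2" checkpoint, which are illustrative choices only; the article does not say what model powers Bing's chatbot. The program simply asks the model which word is most likely to come next and appends it, with no notion of meaning, intent, or emotion.

```python
# Minimal sketch of next-word prediction (assumes the Hugging Face
# "transformers" library and the publicly available "gpt2" checkpoint;
# these are illustrative stand-ins, not the model behind Bing's chatbot).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I think this chatbot is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary

# Greedily pick the single word the model rates as most likely to follow.
next_id = logits[0, -1].argmax().item()
print(prompt + tokenizer.decode(next_id))
```

The point of the sketch is that the model's only job is scoring continuations of the text it has seen; any apparent anger or longing in the output reflects patterns in its training conversations, not an inner state.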