Someone’s prior beliefs about an artificial intelligence agent, such as a chatbot, have a significant effect on their interactions with that agent and on their perception of its trustworthiness, empathy, and effectiveness, according to a new study.

Researchers from MIT and Arizona State University found that priming users — by telling them that a conversational AI agent for mental health support was either empathetic, neutral, or manipulative — influenced how they perceived the agent and shaped how they communicated with it, even though all participants were speaking to the exact same chatbot. Most…