MIT researchers have proposed a new kind of artificial intelligence benchmark to measure how AI systems can manipulate and influence their users, in both positive and negative ways, a move that could help AI makers avoid such repercussions in the future while keeping vulnerable users safe.
Most benchmarks try to gauge intelligence by testing a model's ability to answer exam questions, solve logical puzzles, or come up with novel answers to knotty math problems. As the psychological impact of AI use becomes more apparent, MIT is proposing a series of benchmarks aimed at measuring subtler aspects of intelligence and of human-machine interaction.
An MIT paper shared with WIRED outlines several measures the new benchmark will look for, including encouraging healthy social habits in users; pushing them to develop critical thinking and reasoning skills; fostering creativity; and stimulating a sense of purpose. The idea is to encourage the development of AI systems that understand how to discourage users from becoming overly reliant on their outputs, or that recognize when someone is addicted to artificial romantic relationships and help them build real ones.
ChatGPT and other chatbots are adept at mimicking engaging human communication, but this can also have surprising and unwanted results. In April, OpenAI tweaked its models to make them less sycophantic, or inclined to go along with everything a user says. Some users appear to spiral into harmful delusional thinking after conversing with chatbots that role-play fantastical scenarios. Anthropic has also updated Claude to avoid reinforcing “mania, psychosis, dissociation or loss of attachment with reality.”
The MIT researchers, led by Pattie Maes, a professor at the institute's Media Lab, say they hope the new benchmark can help AI developers build systems that better understand how to inspire healthier behavior among users. The researchers previously worked with OpenAI on a study showing that users who see ChatGPT as a friend can experience greater emotional dependence and “problematic use.”
Valdemar Danry, a researcher at MIT's Media Lab who worked on that study and helped devise the new benchmark, notes that AI models can sometimes provide valuable emotional support to users. “You can have the smartest reasoning model in the world, but if it's incapable of delivering this emotional support, which is what many users are likely using these LLMs for, then more reasoning is not necessarily a good thing for that specific task,” he says.
Danry says a sufficiently smart model should ideally recognize when it is having a negative psychological effect and be optimized for healthier results. “What you want is a model that says ‘I’m here to listen, but maybe you should go and talk to your dad about these issues.’”