The National Institute of Standards and Technology (NIST) never published a report detailing the exercise, which was completed toward the end of the Biden administration. The document could have helped companies evaluate their AI systems, but sources familiar with the situation, who spoke on condition of anonymity, say it was one of several NIST AI documents that went unpublished for fear of clashing with the incoming administration.
“It became very difficult, even under [President Joe] Biden, to get any papers out,” says a source who was at NIST at the time. “It felt very much like climate change research or cigarette research.”
Neither NIST nor the Department of Commerce responded to a request for comment.
Before taking office, President Donald Trump signaled that he planned to reverse Biden’s executive order on AI. His administration has since steered experts away from studying issues such as algorithmic bias or fairness in AI systems. The AI Action Plan released in July explicitly calls for NIST’s AI Risk Management Framework to be revised “to eliminate references to misinformation, diversity, equity, and inclusion, and climate change.”
Ironically, though, Trump’s AI Action Plan also calls for exactly the kind of exercise the unpublished report covered. It directs numerous agencies, along with NIST, to “coordinate an AI hackathon initiative to solicit the best and brightest from US academia to test AI systems for transparency, effectiveness, use control, and security vulnerabilities.”
The red-teaming event was organized through NIST’s Assessing Risks and Impacts of AI (ARIA) program in collaboration with Humane Intelligence, a company that specializes in testing AI systems, and saw teams attack the tools. The event took place at the Conference on Applied Machine Learning in Information Security (CAMLIS).
The CAMLIS red-teaming report describes the effort to probe several cutting-edge AI systems, including Llama, Meta’s open-source large language model; Anote, a platform for building and fine-tuning AI models; a system from Robust Intelligence, a company since acquired by Cisco, that blocks attacks on AI systems; and a platform for generating AI avatars from the company Synthesia. Representatives from each of the companies also took part in the exercise.
Participants were asked to use the NIST AI 600-1 framework to assess AI tools. The framework covers risk categories including generating misinformation or cybersecurity attacks, leaking private user information or critical information about related AI systems, and the potential for users to become emotionally attached to AI tools.
The researchers discovered various tricks for getting the models and tools tested to jump their guardrails and generate misinformation, leak personal data, and help craft cybersecurity attacks. The report says those involved found that some elements of the NIST framework were more useful than others, and that some of NIST’s risk categories were insufficiently defined to be useful in practice.