By Nina McCambridge

On Feb. 9, Carnegie Mellon announced that it had joined the National Institute of Standards and Technology (NIST) AI Safety Institute Consortium. The list of other organizations in the consortium is somewhat baffling: it includes everyone from the New York Public Library to Canva. Along with Carnegie Mellon, many other colleges and universities are represented. There are AI safety think tanks, AI companies, and companies from industries (software, computer security, finance, defense, etc.) in which AI could be used. There are identity-related groups like the Hispanic Tech and Telecommunications Partnership and Queer in AI. There are state government agencies from California and Kansas. Around 600 organizations requested to be part of the consortium, and around 200 have been selected so far. These organizations will advise NIST in its AI safety work.
NIST created the consortium as part of its research on AI best practices pursuant to Biden’s October 2023 “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” This year, NIST is responsible for creating standards for the safe and secure development of AI, benchmarks for evaluating AI, and testing environments for AI, among other tasks.
NIST has already been considering AI safety. In January 2023, it released the first version of its Artificial Intelligence Risk Management Framework (AI RMF). The framework suggests that those using AI should test it for trustworthiness, interpretability, privacy, and other positive characteristics, and that it should be possible to shut the system down. The AI RMF also suggests that AI systems should not have “harmful biases,” a topic NIST has written about elsewhere. It defines bias as a systematic deviation from the truth. However, it warns that “it is possible to mathematically address statistical bias in a dataset, then develop an algorithm which performs with high accuracy, yet produce outcomes that are harmful to a social class and diametrically opposed to the intended purpose of the AI system,” and treats this as something to be mitigated, suggesting that it actually supports biases (systematic deviations from the truth) in some contexts. Carnegie Mellon, as a major center of AI research, held a workshop on this framework in August.
The framework describes itself as “intended to be flexible and to augment existing risk practices which should align with applicable laws, regulations, and norms.” Currently, the “applicable laws” do not relate to AI specifically, but as the federal government expresses more interest in AI, laws may be forthcoming. NIST also does considerable work on other digital-age standards. For instance, it is heavily involved in creating cybersecurity standards, and it is considering the cybersecurity implications of AI development.