NIST’s Current Challenges to AI Risk Assessments

Authored by Ezinne Egbo

While artificial intelligence (AI) and machine learning (ML) are being integrated into day-to-day life, from digital assistants to image, video, and language processing, there is still no formal, finalized risk assessment process for the technology. NIST, however, is in the process of changing that. In March 2022, NIST released the initial draft of its AI Risk Management Framework, and it aims to publish version 1.0 by January 2023. So far, the institute has organized AI risk management around four functions: map, measure, manage, and govern. The map function establishes context and frames risks to AI systems; the measure function analyzes risks and impacts; the manage function prioritizes risks; and the govern function is a cross-cutting function for cultivating a culture of risk management.

But AI risks and incidents that result in harm are continually being identified, which complicates NIST's framework. In the framework's initial draft, NIST states, "While AI benefits and some AI risks are well-known, the AI community is only beginning to understand and classify incidents and scenarios that result in harm." This creates trouble for risk management: a risk that cannot be identified cannot yet be measured quantitatively or qualitatively. So far, the institute has identified three categories of potential harm to consider: harm to people, harm to an organization or enterprise, and harm to a system. Harm to people, for example, can be as severe as infringement of civil liberties or discrimination against protected classes such as race and gender. An additional challenge NIST has identified is that the risks AI poses can be heavily time-dependent: a system may produce different results early in its lifecycle than it does later, because machine learning models continue to evolve as they process new training data.
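To make that time-dependence concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from NIST's framework; the detector, data, and threshold are all invented for illustration. It shows a toy anomaly detector whose verdict on the exact same input changes after it is retrained on newer, drifted data.

```python
# Hypothetical illustration of time-dependent AI risk: a toy anomaly
# detector judges the same input differently before and after retraining.
# All values here are invented for illustration.

from statistics import mean, stdev

def train(observations):
    """Fit a trivial 'model': the mean and spread of the training data."""
    return mean(observations), stdev(observations)

def is_anomalous(value, model, z_threshold=3.0):
    """Flag values more than z_threshold standard deviations from the mean."""
    mu, sigma = model
    return abs(value - mu) > z_threshold * sigma

# Early in the lifecycle: the model is trained on one distribution.
early_model = train([10.0, 10.2, 9.8, 10.1, 9.9, 10.0])

# Later: the model is retrained on data that has drifted upward.
later_model = train([15.0, 15.3, 14.7, 15.1, 14.9, 15.2])

# The same input is judged differently at different points in time.
probe = 14.5
print(is_anomalous(probe, early_model))  # True: flagged as anomalous
print(is_anomalous(probe, later_model))  # False: now looks normal
```

An input that was flagged early in the system's lifecycle is treated as normal later, which is why a one-time risk assessment of an evolving model cannot be assumed to stay valid.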

The institute plans to hold its final virtual workshop on the framework on Tuesday and Wednesday, October 18-19. Stakeholders, including professionals with an understanding of AI concepts and risk issues as well as advocates from communities impacted by AI, are invited to attend and participate. As January 2023 approaches, it will be interesting to see how NIST incorporates stakeholder feedback to address these challenges to AI risk management in the first published version of the framework.

