In short
As AI capabilities and applications continue to advance, the National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework (AI RMF) has become an important tool for organizations to responsibly develop and use AI systems. We reported on the AI RMF shortly after NIST released the framework in January 2023. Since then, the publication has grown in importance, with many U.S. executive orders, acts, and laws building on or incorporating the Framework.
U.S. law increasingly adopts the NIST Artificial Intelligence Risk Management Framework as a standard
For example, in October 2023 the White House issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order seeks to define the trajectory of AI adoption, governance, and use within the U.S. government, and it endorses the AI RMF and other NIST resources. Similarly, California Governor Gavin Newsom issued an executive order on artificial intelligence in September 2023 to promote the responsible use of AI in California’s public sector operations. Among other things, the order directs state agencies to develop guidelines, drawing on the AI RMF, for the procurement and use of generative AI in the public sector. California has since released guidance on generative artificial intelligence for the public sector, which draws heavily on concepts and principles from the NIST AI RMF.
In the private sector, California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act would require developers of frontier AI models and operators of computing clusters to consider NIST guidance; Governor Newsom has until September 30 to sign or veto the bill. Likewise, Colorado’s AI consumer protection law will require deployers of high-risk AI systems to implement risk management policies and programs that consider the AI RMF. Notably, the Colorado law gives organizations that comply with the AI RMF an affirmative defense against regulatory enforcement. See our reviews of the California bill and the Colorado law here and here. These developments highlight a growing trend among lawmakers to adopt the NIST AI RMF as a standard for AI risk management.
But what does it mean to comply with the AI RMF? Below, we summarize some of the framework’s key points and offer ideas for organizations to consider.
The AI RMF aims to minimize risk and maximize AI trustworthiness
The AI RMF provides that the primary goal of a responsible AI actor (i.e., any entity that plays a role in the life cycle of an AI system) is to manage risks and maximize trustworthiness when developing and using AI systems. Risk is a function of the amount of harm that would result if an event occurred and the likelihood of that event occurring (a rough quantification appears after the list below). The AI RMF describes the different harms that can arise when developing and using AI (see Figure 1 of the AI RMF) and lists a dozen reasons why AI risks differ from traditional software risks (see Appendix B of the AI RMF). The AI RMF also lists the following characteristics of trustworthy AI:
- Valid: The AI system meets the requirements of its intended use or application.
- Reliable: The AI system performs as required, without failure, under expected conditions.
- Safe: The AI system does not endanger human life, health, property, or the environment.
- Secure: The AI system maintains its functionality and structure (including confidentiality, integrity, and availability) in the face of internal and external threats.
- Resilient: The AI system can return to normal function after an unexpected adverse event.
- Accountable: Organizations take responsibility for the consequences of their AI systems.
- Transparent: Organizations provide appropriate, tailored information about the AI system and its outputs to the individuals who interact with it.
- Explainable: Organizations can describe the mechanisms by which the AI system operates.
- Interpretable: Organizations can explain the meaning of the AI system’s outputs in the context of its designed functional purpose.
- Privacy-enhanced: Organizations adhere to norms and practices that help safeguard human autonomy, identity, and dignity, including freedom from intrusion and individuals’ agency to consent to the disclosure or control of their identities.
- Fair, with harmful bias managed: Organizations adhere to values of equality and equity while addressing systemic, computational, statistical, and human cognitive biases.
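The AI RMF itself does not prescribe a single formula for risk, but a common simplification (our illustration here, not language from the framework) expresses it as the product of an event’s likelihood and the magnitude of the resulting harm:

\[ \text{Risk} \approx \text{Likelihood of event} \times \text{Magnitude of harm} \]

Under that simplification, for example, a failure mode with a 1-in-100 chance of occurring and an estimated $500,000 in resulting harm would carry an expected risk of $5,000; both figures are purely illustrative.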
Reducing risk and maximizing trustworthiness go hand in hand: the more fully an AI system embodies the characteristics above, the better positioned AI actors are to identify and mitigate its attendant risks. The AI RMF acknowledges that different organizations have different risk tolerances and priorities, but it emphasizes that wider adoption of its principles will allow more of society to benefit from AI while remaining protected from its potential harms.
Governance is key because it enables organizations to map, measure, and manage AI risks
NIST describes four core functions to help organizations address the risks of AI systems. The first function, Govern, sits at the heart of an effective risk management program because it enables the organization to advance the other three: Map, Measure, and Manage. The Govern function calls on an organization to establish and oversee the policies, processes, and practices that guide how it manages AI risks. Map focuses on establishing the context, purpose, and potential risks of an AI system. Measure means assessing and monitoring the performance, risks, and impacts of AI systems to ensure they meet their intended objectives and to mitigate potential harms. Manage means proactively prioritizing, responding to, and mitigating identified risks throughout an AI system’s life cycle.
As a companion to the AI RMF, NIST has published a Playbook that lists numerous suggested actions for advancing each of the four functions, along with the AI actors who should be involved in executing each action. NIST says, “The playbook is neither a checklist nor a complete set of steps to follow.” Even so, an organization seeking to comply with the AI RMF, or to consider the framework in earnest, should probably take a close look at each of the Playbook’s suggested actions, document whether each action is relevant to the organization and why (or why not), and list the measures by which the organization addresses each relevant action. The organization can then comprehensively assess whether its measures are sufficient to address each action in the Playbook, implement a plan to close any gaps, and repeat the process regularly (a minimal sketch of such a tracking exercise appears below). Organizations should take care not to waive attorney-client privilege or work product protections when seeking legal advice on how to mitigate AI risks.
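To make the gap-assessment loop concrete, here is a minimal sketch of how an organization might track Playbook actions internally. It is purely illustrative: the fields, the helper function, and the sample entry are our assumptions for demonstration, not a NIST-defined format.

```python
# Minimal sketch of an internal tracker for Playbook-style gap assessments.
# Field names and statuses are illustrative assumptions, not NIST identifiers.
from dataclasses import dataclass, field

@dataclass
class PlaybookAction:
    action_id: str            # internal reference to a Playbook action (illustrative)
    description: str          # the action being assessed
    relevant: bool            # is this action relevant to our organization?
    rationale: str            # why it is (or is not) relevant
    measures: list[str] = field(default_factory=list)  # how we address it today
    sufficient: bool = False  # do current measures fully address the action?

def open_gaps(actions: list[PlaybookAction]) -> list[PlaybookAction]:
    """Return relevant actions whose current measures are not yet sufficient."""
    return [a for a in actions if a.relevant and not a.sufficient]

# Example usage with a single hypothetical entry:
tracker = [
    PlaybookAction(
        action_id="GOVERN-1",
        description="Legal and regulatory requirements involving AI are "
                    "understood, managed, and documented.",
        relevant=True,
        rationale="We deploy high-risk AI systems subject to state AI laws.",
        measures=["Quarterly regulatory review by counsel"],
        sufficient=False,
    ),
]
for gap in open_gaps(tracker):
    print(f"Gap: {gap.action_id} - {gap.description}")
```

A spreadsheet would serve the same purpose; the point is simply to record relevance, rationale, measures, and sufficiency for each action so that gaps surface in a repeatable, periodic review.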
The AI RMF represents a solid starting point
The AI RMF can help organizations structure and organize their compliance journey, and its accompanying Playbook lists numerous actions an organization may take to implement the framework’s guidance. However, the AI RMF and Playbook are designed to serve organizations in almost any industry and for almost any use case. Accordingly, these documents contain a number of general statements, some of which may not apply to your organization’s contemplated development or use of AI systems. Once your organization has reviewed the AI RMF and Playbook, consider how these tools apply specifically to your organization’s culture, practices, and risk management strategies. Other NIST publications may offer more tailored recommendations for particular AI use cases; for example, NIST’s Generative AI Profile examines how the AI RMF applies to the development and use of generative AI technologies. In addition, NIST plans to update the AI RMF regularly to keep it relevant and useful. Organizations should therefore revisit the framework periodically to ensure that their AI governance plans reflect the latest NIST guidance.