The National Institute of Standards and Technology (NIST) has released the second draft of its Artificial Intelligence (AI) Risk Management Framework (RMF) for comment. Comments are due by September 29, 2022.
NIST, part of the U.S. Department of Commerce, helps individuals and businesses of all sizes better understand, manage and reduce their respective “risk footprint.” Although the NIST AI RMF is a voluntary framework, it has the potential to shape legislation. NIST frameworks have previously served as the basis for state and federal regulations, such as the 2017 New York State Department of Financial Services Cybersecurity Regulation (23 NYCRR 500).
The AI RMF was designed and is intended for voluntary use to address potential risks in “the design, development, use and evaluation of AI products, services and systems.” NIST envisions the AI RMF as a “living document” that will be updated regularly as technology and approaches to AI reliability evolve and change over time.
According to the proposed AI RMF, the specific focus of this new framework is the AI system: an engineered or machine-based system that can, “for a given set of human-defined objectives, generate outputs such as predictions, recommendations or decisions influencing real or virtual environments.”
Amidst the growth of artificial intelligence, the AI RMF provides guidance on how to use AI in a respectful and responsible manner. Cybersecurity frameworks are designed to secure and protect data, and the AI RMF draft appears to complement that goal.
One of the many objectives of the AI RMF is to clarify and better define NIST’s “AI Lifecycle.” The current AI Lifecycle focuses on overall risk management issues. The main audience for this framework, as drafted, is those with responsibility for commissioning or funding an AI system, as well as those within the “enterprise management structure” who work to govern the AI Lifecycle.
For example, as part of the proposed AI RMF, NIST has defined “stages” for the new AI Lifecycle model. These stages include:
- Plan & Design
- Collect & Process Data
- Build & Use Model
- Verify & Validate
- Operate & Monitor
- Use or Impacted By
AI will impact many critical aspects of society over the next few years, including the way we live and work. According to the World Economic Forum, up to 97 million new jobs could be created by the end of 2025 as a result of AI. As AI continues to grow, it is critical to have a viable risk management framework in place.
A companion NIST AI RMF Playbook (Playbook) was published in conjunction with the second draft of the AI RMF. The Playbook is an online resource and “…includes suggested actions, references, and documentation guidance for stakeholders” to implement the recommendations in the AI RMF.
NIST will hold a third and final virtual workshop on October 18-19, 2022, with leading AI experts and interested parties, and expects the final AI RMF and Playbook to be published in January 2023.
We will continue to follow these developments and advise about updates as relevant.
The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.