NIST Releases New Draft of Artificial Intelligence Risk Management Framework for Comment


The National Institute of Standards and Technology (NIST) has released the second draft of its Artificial Intelligence (AI) Risk Management Framework (RMF) for comment. Comments are due by September 29, 2022.

NIST, part of the U.S. Department of Commerce, helps individuals and businesses of all sizes better understand, manage and reduce their respective “risk footprint.” Although the NIST AI RMF is a voluntary framework, it has the potential to influence legislation. NIST frameworks have previously served as the basis for state and federal regulations, such as the 2017 New York State Department of Financial Services Cybersecurity Regulation (23 NYCRR 500).

The AI RMF was designed and is intended for voluntary use to address potential risks in “the design, development, use and evaluation of AI products, services and systems.” NIST envisions the AI RMF as a “living document” that will be updated regularly as technology and approaches to AI reliability evolve and change over time.

Continue reading “NIST Releases New Draft of Artificial Intelligence Risk Management Framework for Comment”

The U.S. in the AI Era: the National Security Commission on Artificial Intelligence Releases Report Detailing Policy Recommendations


On March 1, 2021, the National Security Commission on Artificial Intelligence (NSCAI) released its 700-page Final Report (the “Report”), which presents NSCAI’s recommendations for “winning the AI era” (The Report can be accessed here). This Report issues an urgent warning to President Biden and Congress: if the United States fails to significantly accelerate its understanding and use of AI technology, it will face unprecedented threats to its national security and economic stability. Specifically, the Report cautions that the United States “is not organizing or investing to win the technology competition against a committed competitor, nor is it prepared to defend against AI-enabled threats and rapidly adopt AI applications for national security purposes.”

In the Final Report, NSCAI makes a number of detailed policy recommendations “to advance the development of AI, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.” The Report, its findings and recommendations all signal deep concern that the U.S. has underinvested in AI and must play catch-up in order to safeguard its future.

Continue reading “The U.S. in the AI Era: the National Security Commission on Artificial Intelligence Releases Report Detailing Policy Recommendations”

New Executive Order on Maintaining American Leadership in Artificial Intelligence


On February 11, 2019, President Trump signed an Executive Order on “Maintaining American Leadership in Artificial Intelligence.” The Executive Order (EO) recognizes that the United States “is the world leader in AI research and development (R&D) and deployment,” and that “[c]ontinued American leadership in AI is of paramount importance. . . .”

Continue reading “New Executive Order on Maintaining American Leadership in Artificial Intelligence”

FCC Announces its Agenda and Speakers for its AI and Machine Learning Forum


On November 7, the FCC, in a relatively terse Public Notice, announced that it would hold a Forum at its headquarters on November 30 focused on artificial intelligence (AI) and machine learning, at which experts in AI and machine learning would discuss the future of these technologies and their implications for the communications marketplace.

Continue reading “FCC Announces its Agenda and Speakers for its AI and Machine Learning Forum”

The FCC Wades into the Artificial Intelligence (AI), Machine Learning Pool


On November 7, Federal Communications Commission Chairman Ajit Pai issued a Public Notice announcing a first ever FCC Forum focusing on artificial intelligence (AI) and machine learning. This Forum will convene at FCC headquarters on November 30 and will feature experts in AI and machine learning discussing the future of these technologies and their implications for the communications marketplace.

Continue reading “The FCC Wades into the Artificial Intelligence (AI), Machine Learning Pool”

US FDA Approaches to Artificial Intelligence


Artificial Intelligence (AI) can be employed in a health care setting for a variety of tasks, from managing electronic health records at a hospital, to market research at a benefits management organization, to optimizing manufacturing operations at a pharmaceutical company. The level of regulatory scrutiny of such systems depends on their intended use and associated risks.

In the U.S., for medical devices using AI, one of the key regulatory bodies is the Food and Drug Administration (FDA), especially its Center for Devices and Radiological Health (CDRH). CDRH has long followed a risk-based approach in its regulatory policies, and has officially recognized ISO Standard 14971 “Application of Risk Management to Medical Devices.” That standard is over 10 years old now, and therefore is currently undergoing revisions – some of which are meant to address challenges posed by AI and other digital tools that are flooding the medical-devices arena.

Continue reading “US FDA Approaches to Artificial Intelligence”
