Our latest briefing explores the recent FTC commercial surveillance and data security forum (including discussion on widespread use of AI and algorithms in advertising), California’s inquiry into potentially discriminatory health care algorithms, and the recent California Department of Insurance workshop that could shape future rulemaking regarding the industry’s use of artificial intelligence, machine learning and algorithms.
The National Institute of Standards and Technology (NIST) has released the second draft of its Artificial Intelligence (AI) Risk Management Framework (RMF) for comment. Comments are due by September 29, 2022.
NIST, part of the U.S. Department of Commerce, helps individuals and businesses of all sizes better understand, manage and reduce their respective “risk footprint.” Although the NIST AI RMF is a voluntary framework, it has the potential to influence legislation. NIST frameworks have previously served as the basis for state and federal regulations, such as the 2017 New York State Department of Financial Services Cybersecurity Regulation (23 NYCRR 500).
The AI RMF was designed and is intended for voluntary use to address potential risks in “the design, development, use and evaluation of AI products, services and systems.” NIST envisions the AI RMF as a “living document” that will be updated regularly as technology and approaches to AI reliability evolve and change over time.
On March 1, 2021, the National Security Commission on Artificial Intelligence (NSCAI) released its 700-page Final Report (the “Report”), which presents NSCAI’s recommendations for “winning the AI era.” The Report issues an urgent warning to President Biden and Congress: if the United States fails to significantly accelerate its understanding and use of AI technology, it will face unprecedented threats to its national security and economic stability. Specifically, the Report cautions that the United States “is not organizing or investing to win the technology competition against a committed competitor, nor is it prepared to defend against AI-enabled threats and rapidly adopt AI applications for national security purposes.”
In the Final Report, NSCAI makes a number of detailed policy recommendations “to advance the development of AI, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.” The Report, its findings and recommendations all signal deep concern that the U.S. has underinvested in AI and must play catch-up in order to safeguard its future.
On February 11, 2019, President Trump signed an Executive Order on “Maintaining American Leadership in Artificial Intelligence.” The Executive Order (EO) recognizes that the United States is “the world leader in AI research and development (R&D) and deployment,” and that “[c]ontinued American leadership in AI is of paramount importance. . . .”
On November 7, Federal Communications Commission Chairman Ajit Pai issued a Public Notice announcing a first-ever FCC Forum focusing on artificial intelligence (AI) and machine learning. The Forum will convene at FCC headquarters on November 30 and will feature experts in AI and machine learning discussing the future of these technologies and their implications for the communications marketplace.