Update: AI Regulation in the U.K. — New Government Approach


In October 2022, the U.K. Medicines and Healthcare products Regulatory Agency (MHRA) published its guidance, Software and AI as a Medical Device Change Programme – Roadmap, setting out how it will regulate software and AI medical devices in the U.K. while balancing patient protection with providing certainty to industry.

Background to the Reforms

The MHRA initially announced the Software as a Medical Device (SaMD) and Artificial Intelligence as a Medical Device (AIaMD) Change Programme in September 2021, designed to ensure that regulatory requirements for software and AI are clear and patients are kept safe. This builds on the broader reform of the medical device regulatory framework detailed in the Government response to consultation on the future regulation of medical devices in the United Kingdom, which recently saw its timetable for implementation extended by 12 months to July 2024.


What Is Algorithmic Bias? Why Is It Important? – Faegre Drinker on Law and Technology Podcast


Chances are good that your organization uses algorithms or artificial intelligence to help make business decisions — and that regulatory efforts targeting these automated decision-making systems, including their potential to produce unintended bias, have caught your attention. In this episode of the Faegre Drinker on Law and Technology Podcast, host Jason G. Weiss sits down with Bennett Borden, Faegre Drinker’s chief data scientist and co-founder of the firm’s artificial intelligence and algorithmic decision-making (AI-X) team, to discuss algorithmic bias and what companies should know about the latest regulatory developments.


The U.S. in the AI Era: the National Security Commission on Artificial Intelligence Releases Report Detailing Policy Recommendations


On March 1, 2021, the National Security Commission on Artificial Intelligence (NSCAI) released its 700-page Final Report (the “Report”), which presents NSCAI’s recommendations for “winning the AI era.” The Report issues an urgent warning to President Biden and Congress: if the United States fails to significantly accelerate its understanding and use of AI technology, it will face unprecedented threats to its national security and economic stability. Specifically, the Report cautions that the United States “is not organizing or investing to win the technology competition against a committed competitor, nor is it prepared to defend against AI-enabled threats and rapidly adopt AI applications for national security purposes.”

In the Final Report, NSCAI makes a number of detailed policy recommendations “to advance the development of AI, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.” The Report, its findings and recommendations all signal deep concern that the U.S. has underinvested in AI and must play catch-up in order to safeguard its future.


New Executive Order on Maintaining American Leadership in Artificial Intelligence


On February 11, 2019, President Trump signed an Executive Order on “Maintaining American Leadership in Artificial Intelligence.” The Executive Order (EO) recognizes that the United States “is the world leader in AI research and development (R&D) and deployment,” and that “[c]ontinued American leadership in AI is of paramount importance. . . .”


The FCC Wades into the Artificial Intelligence (AI), Machine Learning Pool


On November 7, Federal Communications Commission Chairman Ajit Pai issued a Public Notice announcing a first-ever FCC Forum focusing on artificial intelligence (AI) and machine learning. The Forum will convene at FCC headquarters on November 30 and will feature experts in AI and machine learning discussing the future of these technologies and their implications for the communications marketplace.


US FDA Approaches to Artificial Intelligence


Artificial Intelligence (AI) can be employed in a health care setting for a variety of tasks, from managing electronic health records at a hospital, to conducting market research at a benefits management organization, to optimizing manufacturing operations at a pharmaceutical company. The level of regulatory scrutiny of such systems depends on their intended use and associated risks.

In the U.S., one of the key regulatory bodies for medical devices that use AI is the Food and Drug Administration (FDA), particularly its Center for Devices and Radiological Health (CDRH). CDRH has long followed a risk-based approach in its regulatory policies and has officially recognized ISO 14971, “Application of Risk Management to Medical Devices.” That standard is now more than 10 years old and is currently undergoing revision – in part to address challenges posed by AI and other digital tools that are flooding the medical device arena.

