Our latest briefing dives into the public launch of NIST’s long-awaited AI Risk Management Framework, the EEOC’s new plan to tackle AI-based discrimination in recruitment and hiring, and the New York Department of Financial Services’ effort to better understand the potential benefits and risks of AI and machine learning in the life insurance industry.
Meta Ireland (Meta) was recently issued two fines by the Irish Data Protection Commission (DPC) for breaches of the EU General Data Protection Regulation (GDPR) relating to advertisements run on its Facebook and Instagram services. The decisions highlight some fundamental issues for all data controllers in identifying the appropriate legal basis for their data processing operations and in being transparent about how personal data is used. The decisions also reveal some core differences in approach between the DPC, the Irish national privacy regulator in this case, and the European Data Protection Board (EDPB), and they signal likely ongoing wrangling between the various European data regulators as they seek to interpret the decisions and as the decisions are (inevitably) challenged through the courts.
The penalties imposed on Meta Ireland
The substantial fines of €210m (approximately $223m) for Facebook and €180m (approximately $191m) for Instagram reflect the consolidated turnover of the Meta group and the level of fines which, in the EDPB’s view, are required to be effective, proportionate and dissuasive in accordance with Article 83(1) of the GDPR. Meta now has three months to take corrective action and amend its privacy policies (including identifying an appropriate legal basis for processing) and its operations to bring its data processing in line with the GDPR.
Prompted by a rapid increase in the frequency, sophistication, and scale of data leaks and data breaches in recent years, the Federal Communications Commission (FCC) unanimously voted to kick off a proceeding aimed at updating data breach response obligations involving Customer Proprietary Network Information (CPNI). The proposals aim to ensure timely notification to affected customers, the FCC, and federal law enforcement agencies, and to require effective measures to mitigate and prevent harm.
CPNI is a subset of personal information relating to telecommunications carriers’ customers, and the FCC has maintained rules about safeguarding the confidentiality of CPNI data for many years. Examples of CPNI include rate plan, minutes used, type of services subscribed to, type of device, location information, call detail records, and other proprietary information about a customer’s telecommunications services accounts.
In October 2022, the U.K. Medicines and Healthcare products Regulatory Agency (MHRA) published its Guidance, Software and AI as a Medical Device Change Programme – Roadmap, setting out how it will regulate software and AI medical devices in the U.K. by balancing patient protection with providing certainty to industry.
Background to the Reforms
The MHRA initially announced the Software as a Medical Device (SaMD) and Artificial Intelligence as a Medical Device (AIaMD) Change Programme in September 2021, designed to ensure that regulatory requirements for software and AI are clear and patients are kept safe. This builds on the broader reform of the medical device regulatory framework detailed in the Government response to consultation on the future regulation of medical devices in the United Kingdom, which recently saw its timetable for implementation extended by 12 months to July 2024.
As of March 2022, there were more than 18,000 different cryptocurrencies in circulation, with a total market capitalization of $2 trillion. In this episode of the Faegre Drinker on Law and Technology Podcast, host Jason G. Weiss chats with Faegre Drinker partner Jeff Blumberg and associate Jane Blaney to revisit the world of crypto, discussing not only the foundations of cryptocurrencies but also the world of blockchain and non-fungible tokens, or NFTs.
The conversation tackles a number of questions, including:
- What is blockchain and what makes it different from traditional recordkeeping processes?
- What exactly is cryptocurrency? What is important to understand about it?
- What exactly is an NFT? How does it work, in easy-to-understand terms?
- What uses does blockchain have, both with cryptocurrencies and beyond them?
- Are cryptocurrencies considered “legal tender”? Can people use them like normal money?
- How can and will NFTs be used in other ways in our society as these concepts grow?
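For readers curious about the first question above — what distinguishes blockchain from traditional recordkeeping — a minimal, purely illustrative Python sketch can show the core idea: each block stores the hash of its predecessor, so quietly rewriting an old record breaks every link that follows. (The toy `make_block`/`chain_is_valid` helpers here are assumptions for illustration, not any real blockchain implementation.)

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (including the previous block's hash)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    """Create a new block linked to its predecessor by that block's hash."""
    return {"data": data, "prev_hash": prev_hash}

def chain_is_valid(chain):
    """Each block must reference the actual hash of the block before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

# Build a tiny three-block ledger.
genesis = make_block("genesis", "0" * 64)
b1 = make_block("Alice pays Bob 5", block_hash(genesis))
b2 = make_block("Bob pays Carol 2", block_hash(b1))
chain = [genesis, b1, b2]

print(chain_is_valid(chain))   # prints True

genesis["data"] = "tampered"   # rewrite history in the oldest block...
print(chain_is_valid(chain))   # prints False: the hash links no longer match
```

Unlike a traditional database, where an administrator can silently edit a row, the tamper here is detectable by anyone holding a copy of the chain — which is why distributed ledgers can operate without a trusted central recordkeeper.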
The National Institute of Standards and Technology (NIST) has released the second draft of its Artificial Intelligence (AI) Risk Management Framework (RMF) for comment. Comments are due by September 29, 2022.
NIST, part of the U.S. Department of Commerce, helps individuals and businesses of all sizes better understand, manage and reduce their respective “risk footprint.” Although the NIST AI RMF is a voluntary framework, it has the potential to influence legislation: NIST frameworks have previously served as the basis for state and federal regulations, such as the 2017 New York State Department of Financial Services Cybersecurity Regulation (23 NYCRR 500).
The AI RMF was designed and is intended for voluntary use to address potential risks in “the design, development, use and evaluation of AI products, services and systems.” NIST envisions the AI RMF as a “living document” that will be updated regularly as technology and approaches to AI reliability evolve and change over time.