Our latest briefing explores the recent FTC commercial surveillance and data security forum (including discussion on widespread use of AI and algorithms in advertising), California’s inquiry into potentially discriminatory health care algorithms, and the recent California Department of Insurance workshop that could shape future rulemaking regarding the industry’s use of artificial intelligence, machine learning and algorithms.
On August 11, the Federal Trade Commission (FTC) approved, on a split party-line vote, an Advance Notice of Proposed Rulemaking (the Notice) focused on potential new rules and requirements that could apply to entities engaged in targeted advertising or other forms of personal information gathering and sharing. Once the Notice is published in the Federal Register, the public will have 60 days to comment on the merits of the proposed rules. A public forum on the Notice is also slated for September 8. The FTC’s action comes on the heels of legislative attempts to codify federal privacy protections that have yet to come to fruition.
On November 9, 2020, the United States Federal Trade Commission (FTC) announced that it had entered into a consent agreement, subject to final approval, with videoconferencing company Zoom Video Communications, Inc. (Zoom). The consent agreement settles allegations that Zoom engaged in a series of deceptive and unfair practices that undermined the security of its users. The Commission voted 3–2 to accept the settlement, with Commissioners Chopra and Slaughter voting no and issuing dissenting statements asserting that the FTC’s action did not go far enough.
While the FTC generally does not identify what triggers a law enforcement action, numerous news articles and class action lawsuits concerning Zoom’s data security practices over the past six months likely led to this action.
The Federal Trade Commission’s Opinion finding that Cambridge Analytica engaged in deceptive practices to harvest personal information closes another chapter in the Commission’s actions against Cambridge Analytica and its former chief executive and app developer. The opinion is noteworthy for two reasons. First, the procedural posture of this matter is unique because Cambridge Analytica failed to appear or to answer the complaint. This allowed the Commission under its Rules of Practice to find the facts to be as alleged in the complaint and to enter a final decision. Second, the Commission’s opinion holds that a false express privacy claim is material and thus violates Section 5 of the FTC Act.
In 2017, the FTC filed a complaint against D-Link Systems, Inc. (D-Link) alleging that the Taiwan-based computer networking equipment manufacturer had taken inadequate security measures that left its wireless routers and Internet-connected cameras vulnerable to hackers. In early July, D-Link agreed to a settlement that requires it to implement a comprehensive software security program and to obtain biennial, independent third-party assessments of that program for 10 years.
Two of the Federal Trade Commission’s (FTC’s) most recent data security settlements include new requirements that go beyond those in previous data security settlements. The new provisions (1) require a senior corporate officer to provide annual certifications of compliance to the FTC and (2) specifically prohibit misrepresentations to the third parties conducting the required assessments. A statement accompanying these settlements noted that the FTC has instructed staff to examine whether its privacy and data security orders could be strengthened and improved.