One month after announcing enforcement actions against data brokers X-Mode and InMarket Media, the Federal Trade Commission (FTC) announced on February 22, 2024, a complaint and proposed consent order requiring software security company Avast Limited and two subsidiaries, Avast s.r.o. and Jumpshot, Inc. (collectively, Avast), to pay $16.5 million to resolve allegations that they unfairly and deceptively sold granular, reidentifiable web browsing data for advertising purposes. The FTC’s action against Avast reflects its continued focus on the mass collection and sale of sensitive personal data for advertising purposes.

Avast Complaint

In its complaint, the FTC alleges that Avast marketed its products, including browser extensions and antivirus software, as tools to protect consumer privacy, such as by blocking third parties from tracking online activity through cookies. The FTC alleges that Avast (via its Jumpshot subsidiary) collected more than eight petabytes of consumer browsing data, such as search queries and the URLs of webpages visited by consumers, via browser extensions and antivirus software marketed as privacy-protective. The FTC alleges that Avast indefinitely retained these browsing records, typically tied to a persistent identifier, in granular form. The FTC further alleges that Avast sold these detailed data feeds to a variety of clients—including advertising, marketing, and data analytics companies and data brokers.

The FTC claims such actions were deceptive. According to the FTC, after advertising to consumers both that its products would protect their privacy by preventing third parties from tracking their online activity and that it would only ever share their browsing data in aggregate and anonymous form, Avast turned around and did the exact opposite. The FTC’s complaint alleges that Avast sold granular data that purchasers were not only free to re-associate with individuals but that, in some cases, such re-association was the very point of the purchase.

The FTC also alleges that Avast’s collection, retention, and sale of the granular browsing data was unfair. According to the FTC, this data processing was done without adequate notice and consumer consent. More specifically, the FTC alleges that in many instances, Avast’s privacy disclosures either did not state that consumers’ browsing data would be shared with third parties for advertising purposes or indicated that such data would only be shared in aggregate and anonymous form. Notably, the FTC also characterizes “re-identifiable browsing data” as “sensitive,” and alleges that the browsing data collected by the Avast products, such as web searches and websites visited, reveals consumers’ religious beliefs, health concerns, political leanings, location, financial status, visits to child-directed content, and interest in prurient content. According to the FTC, Avast’s practice of linking browsing information to device and other identifiers, as well as coarse location data, over time, increased the likelihood that a consumer could be reidentified, which was likely to cause substantial consumer injury.

Consent Order

The proposed consent order generated headlines with the requirement that Avast must pay $16.5 million. The FTC commissioners touted this as “the highest monetary remedy in a de novo privacy violation case” brought by the FTC to date, that is, the highest monetary remedy for a privacy violation under Section 5(a) of the FTC Act. The FTC has said that it intends to use this money to provide redress to affected consumers.

The order bans Avast from selling, licensing, or otherwise disclosing web browsing data from Avast products to third parties for advertising purposes. It also requires Avast to obtain affirmative express consent before selling, licensing, or otherwise disclosing web browsing data from non-Avast products to third parties for such purposes.

Similar to the X-Mode and InMarket orders, the order mandates that Avast not only delete the web browsing data that it collected through Jumpshot, but also delete or destroy any models, algorithms, or software developed based on that data. Avast also must instruct any third party that received such data to delete the data, any models or algorithms derived from it, and any software developed to analyze it.

In addition, the order subjects Avast to typical FTC privacy order provisions, such as prohibitions on certain privacy-related misrepresentations and the requirement to implement a mandated privacy program with biennial third-party assessments for 20 years.

Takeaways

The FTC’s enforcement actions against X-Mode, InMarket, and now Avast signal the agency’s continued focus on data brokers and others in the business of aggregating and selling large volumes of what the FTC views as sensitive data for advertising purposes. In a recent blog post, the FTC reinforced common themes across the Avast, X-Mode, and InMarket actions, such as the following:

  • First, in a line that has attracted significant attention, the FTC asserts that “Browsing and location data are sensitive. Full stop.” While the FTC has long asserted that precise location data is sensitive, it remains to be seen whether its characterization of web browsing data as “sensitive” marks a sustained shift in the FTC’s thinking, or if this is a reflection of the specific facts of Avast’s alleged practices. In any event, the Avast case makes clear that even data lacking “traditional standalone elements of personally identifiable information” can reveal sensitive information about consumers. And if the risk of such disclosure is likely to cause substantial injury to consumers, it may be unfair.
  • Second, the FTC expects companies to be clear about how consumers’ personal data will be used, shared, and retained. Without clear notice, “[p]eople have no way to object to—let alone control,” how their data is handled.
  • Third, the purposes for which data is processed should align with the purposes for which it was collected.
  • Fourth, the FTC expressed skepticism about contractual restrictions on data reidentification or misuse where, for example, such restrictions contain loopholes or are not audited or enforced against downstream recipients of data.

Last month, Senators Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.) reintroduced the Kids Online Safety Act (KOSA), initially introduced last term, noting that the bill now has 62 cosponsors and bipartisan support and is poised to pass in the Senate.

KOSA would apply to online platforms (including social media services and virtual reality environments), online video games, messaging applications, and video streaming services that are used, or are reasonably likely to be used, by an individual under 17 years of age, subject to enumerated exceptions.

Below we discuss some of KOSA’s key requirements, including notable changes in the most recent version of the bill, as well as in the incorporated Filter Bubble Transparency Act.

Continue Reading Kids Online Safety Act Gains Momentum in the Senate

Enacted in 1998, Illinois’ Genetic Information Privacy Act (GIPA) governs the confidentiality and use of genetic testing and genetic information by employers and insurers. The statute was designed to prevent employers and insurers from using genetic testing and information as a means of discrimination. To that end, GIPA prohibits employers and their agents from directly or indirectly soliciting, requesting, requiring, or purchasing genetic testing and genetic information from a person as a condition of employment or from using such information in a discriminatory manner against an employee or applicant. The statute similarly prohibits insurers from seeking information derived from genetic testing for use in connection with a “policy of accident and health insurance.”

Read the full Update here.

The Federal Trade Commission issued a supplemental notice of proposed rulemaking on February 15, 2024, in which it recommended a trade regulation rule that would (1) impose liability on businesses that provide goods or services (including artificial intelligence technology) with knowledge or reason to know they will be used to engage in unlawful impersonation of individuals, government, or businesses; and (2) prohibit impersonation of individuals.

Read the full Update here.

On Friday, February 9, as the country collectively packed up and prepared to head home for Super Bowl weekend, the California Court of Appeal, Third Appellate District, issued an order granting the California Privacy Protection Agency the ability to immediately enforce regulations implementing the California Privacy Rights Act, which were finalized in March 2023. This vacated a July 2023 court decision staying enforcement of the regulations until March 29, 2024.

The order paves the way for enforcement after finalization of the proposed regulations governing cybersecurity audits, risk assessments, and automated decision making technology.

Read the full Update here.

Artificial Intelligence-generated robocalls may trick some consumers into thinking they are being called by a human being, but the Federal Communications Commission clarified in a recent AI Declaratory Ruling that it will not be fooled. Moving forward, all AI-generated robocalls will be treated as artificial or prerecorded voice calls for purposes of the Telephone Consumer Protection Act and will require a called party’s prior express consent.

Read the full Update here.

Building on the momentum from last year’s torrent of new comprehensive state privacy laws, 2024 has begun with a bang as two more states have now entered the picture. On January 16, 2024, New Jersey became the latest state to enact comprehensive privacy legislation with the New Jersey Data Privacy Act (NJDPA). New Hampshire’s state legislature quickly followed suit by passing Senate Bill 255, which is currently awaiting finalization before becoming law.

Continue Reading Two New States Enter the Privacy Fray

On February 1, 2024, the Federal Trade Commission announced a complaint and proposed consent order against Blackbaud, Inc. concerning a 2020 data security incident that included a ransomware demand and payment. According to the FTC’s complaint, Blackbaud’s allegedly unfair and misleading conduct included not only deficient data security practices but also a delay in providing accurate notice of the breach to its business customers, as well as deceptive statements about the scope and severity of the breach in its initial notice to those customers.

The FTC highlighted that this case is the first time it has brought standalone Section 5 unfairness claims arising out of the alleged failure to (1) implement and enforce reasonable data retention practices and (2) accurately communicate the severity and scope of the breach.

Read the full Update here.

Safety risk assessments are becoming a preferred regulatory tool around the world. Online safety laws in Australia, Ireland, the United Kingdom, and the United States will require a range of providers to evaluate the safety and user-generated content risks associated with their online services.

While the specific assessment requirements vary across jurisdictions, the common thread is that providers will need to establish routine processes to determine, document, and mitigate safety risks resulting from user-generated content and product design. This Update offers practical steps for providers looking to develop a consolidated assessment process that can be easily adapted to meet the needs of laws around the world.

Read the full Update here.

California Attorney General Rob Bonta announced an investigatory sweep into popular streaming apps and devices, timed to coincide with Data Privacy Day on January 28. The California Attorney General’s Office explained that it is sending letters to such streaming services alleging a failure to comply with the requirement to offer an easy mechanism to opt out of the sale or sharing of personal information under the California Consumer Privacy Act (CCPA).

Continue Reading California Announces Sweep on Streaming Services and More Enforcement To Come