After more than a decade of uncontrolled experiments by internet platforms on millions of users, there is an emerging possibility that one group of users — kids — may gain some protection. A wave of court cases has an opportunity to fill a void left by the inaction of the executive and legislative branches of the federal government.
In the eight years since Russia used Facebook, Instagram and other platforms to interfere in the U.S. presidential election, Congress has done nothing to protect our democracy from assault by bad actors. It has stood by while platforms do anything that earns them a buck. It has also done nothing to protect Americans from the manipulative practices of surveillance capitalism. The White House has done only slightly more than nothing. Courts continue to side with internet platforms over the people who use them.
It should be no surprise that federal politicians favor Big Tech. Silicon Valley is where the money is. Just as important, voters have not penalized politicians for failing in their duty to protect the public interest. There has been no outcry about politicians whose family members work in Big Tech and staff members whose salaries are paid by owners of Big Tech. Politicians at the state level have passed some tech reform legislation, with California leading the way, but industry lobbying has taken the teeth out of most of the laws.
In court, internet platforms have avoided unfavorable judgments by asserting rights to free speech, as well as the protection of Section 230 of the Communications Decency Act of 1996. While there have historically been limits on 1st Amendment protection for harmful speech, courts have not applied any limit to the speech of internet platforms. Section 230, which was created to enable internet platforms to moderate harmful speech online, has been interpreted by courts as blanket immunity, even in cases of negligence.
Internet platforms should not be allowed to harm children (and adults) with impunity. They should not be allowed to undermine democracy and public health for profit. These notions seem obvious to everyone but those in a position to rectify the situation.
The Wall Street Journal published a report last summer titled, “Instagram Connects Vast Pedophile Network: The Meta unit’s systems for fostering communities have guided users to child-sex content.” Unredacted testimony from a federal court in California revealed that Meta employees warned Mark Zuckerberg that the design of Instagram led to addiction for many teens, only to have Zuckerberg ignore the warnings.
The common element to both stories is the indifference of Meta management to harm. The underlying cause of that indifference is the absence of consumer safety regulations for tech. Consumer safety creates friction that limits growth and profitability, something platforms avoid at all costs. Eight years of trusting platforms to self-regulate has not prevented them from being used to instigate acts of terrorism, unleash a tsunami of public health disinformation in a pandemic or enable an insurrection at the U.S. Capitol.
Fortunately, a new wave of legal cases will give courts an opportunity to change course.
The cases aim to protect children online by challenging the design of internet platforms. Thirty-three state attorneys general — led by California and Colorado — have filed a case in federal court against Meta for designing products to addict children. Nine other state attorneys general filed similar cases in their own state courts.
By focusing on product design, the cases minimize conflict with the 1st Amendment and Section 230. Free speech and the right to moderate speech are protected by the law, whereas product design that leads to harm and the refusal to remediate it should not be. With cases in 10 jurisdictions, the odds of a favorable outcome for the plaintiffs are better than they would be in a single jurisdiction.
In addition, there will be an appeal in federal court related to California’s Age Appropriate Design Code, a law that requires platforms to protect the privacy of minors in an age-appropriate way. Modeled on a successful consumer protection law in Britain, the California measure passed the Legislature unanimously and was signed into law in September 2022. NetChoice, a trade organization funded by Google, Meta, TikTok, Amazon and others, quickly sued to block the law.
A federal district court judge in September granted a preliminary injunction on the basis that the law probably violates the 1st Amendment. The flaw in the court's reasoning is that the law has nothing to do with content or expression. The decision suggests that corporations can use the 1st Amendment to defeat regulations designed to protect the public interest.
California Atty. Gen. Rob Bonta has filed an appeal to challenge the injunction, arguing that we "should be able to protect our children as they use the internet. Big businesses have no right to our children's data: childhood experiences are not for sale." Bonta should have extended this logic to cover all Californians, but the wisdom of it in the context of children is self-evident.
By coincidence, new whistleblower disclosures have exposed reckless business practices by Meta. In testimony before a Senate committee, whistleblower Arturo Béjar confirmed that Meta’s management was fully aware of the prevalence of misogyny and unwanted sexual advances toward teenagers on Instagram and refused to take action.
Béjar’s testimony builds on that of Frances Haugen, who in 2021 provided documentary evidence that Meta’s management knew Instagram was toxic for teenage girls. Yet even after that disclosure, Meta escaped liability. It remains to be seen whether Béjar’s testimony will produce any legislative action.
The best way to ensure protection for consumers online is for Congress to pass laws that protect Americans from harmful tech products and predatory data practices. But until that happens, the courts may be our children’s only line of defense.
Roger McNamee is a co-founder of Elevation Partners and the author of “Zucked: Waking Up to the Facebook Catastrophe.”