California wants to coerce platforms into hosting less offensive speech with algorithmic liability law
First Amendment and Section 230 shocked to learn they no longer exist
This week, FIRE wrote to California Governor Gavin Newsom, urging him to veto SB 771, a bill that would allow users and government enforcers to sue large social media platforms for enormous sums if their algorithms relay user-generated content that contributes to a violation of certain civil rights laws. Set aside the obvious question of how often social media posts actually violate civil rights laws. Oddly enough, that seems to be kind of beside the point.
Obviously, platforms are going to have a difficult time knowing if any given post might later be alleged to have violated a civil rights law. So to avoid the risk of huge penalties, they will simply suppress any content (and user) that is hateful or controversial — even when it is fully protected by the First Amendment.
And that’s exactly what the California legislature wants. In its bill analysis, the Senate Judiciary Committee chair’s staff made clear that the goal was not just to target unlawful speech, but to make platforms wary of hosting “hate speech” more generally:
This cause of action is intended to impose meaningful consequences on social media platforms that continue to push hate speech . . . to provide a meaningful incentive for social media platforms to pay more attention to hate speech . . . and to be more diligent about not serving such content.
Supporters have tried to sidestep SB 771’s First Amendment and Section 230 problems, largely by obfuscating what the bill actually does. To hear them tell it, SB 771 doesn’t create any new liability; it just holds social media companies responsible if their algorithms aid and abet a violation of civil rights law, which is already illegal.
But if you look just a little closer, that explanation doesn’t hold up. To understand why, it helps to clarify what “aiding and abetting” liability actually requires. Fortunately, the Supreme Court explained this just recently, in a case that was itself about social media algorithms.
In Twitter v. Taamneh, the plaintiffs claimed that social media platforms had aided and abetted acts of terrorism by algorithmically arranging, promoting, and connecting users to ISIS content, and by failing to prevent ISIS from using their services after being made aware of the unlawful use.
The Supreme Court ruled that they had not made out a claim, because aiding and abetting requires not just awareness of the wrongful goals, but a “conscious intent to participate in, and actively further, the specific wrongful act.” All the social media platforms had done was create a communications infrastructure that treated ISIS content just like any other content — and that is not enough.
California law likewise requires knowledge, intent, and active assistance for aiding-and-abetting liability. But nobody seriously thinks platforms have designed their algorithms to facilitate civil rights violations.
So SB 771 has a problem. Under the existing standard, it will never do anything, which is obviously not what its supporters intend. Therefore, they hope to create a new form of liability — recklessly aiding and abetting — that attaches when a platform knows there is a serious risk of harm and chooses to ignore it.
This is expansive and troubling in its own right. The universe of hateful content is, to use a somewhat ill-fitting word, diverse. Trying to write algorithms that can catch every way people phrase awful things, especially when that phrasing is often designed to evade such efforts, is a fruitless endeavor. And then there’s the problem that a platform generally has no idea what is going on between any two users outside the limited window its service provides. Content that looks entirely innocent could be malicious in ways not apparent to outsiders.
But wait, there’s more.
Lest you imagine that the requirement that a platform know of a serious risk of harm prevents such broad and unpredictable applications, SB 771 also deems platforms, as a matter of law, to have actual knowledge of how their algorithms interact with every user, including why any given piece of content will or will not be shown to them.
And that's just another way of saying that every platform knows there’s a chance users will be exposed to harmful content by virtue of using algorithms to relay content. All that’s left is for users to show that a platform consciously ignored that risk.
That will be trivially easy. Here’s the argument: the platform knew of the risk and still deployed the algorithm instead of trying to make it “safer.”
Soon, social media platforms will be liable solely for using an “unsafe” algorithm, even if they were entirely unaware of the offending content, let alone had any reason to think it was unlawful.
But the First Amendment requires that liability for distributing speech be premised on the distributor’s knowledge of the expression’s nature and character. Otherwise, nobody could distribute expression they haven’t inspected, which would “tend to restrict the public’s access to [expression] the State could not constitutionally suppress directly.” Unfortunately for California, the very goal it wants SB 771 to accomplish is what makes the bill unconstitutional.
And this liability is even more expansive than it appears: as the bill is written, it is not even restricted to content recommendation algorithms (though it would still be unconstitutional if it were).
SB 771 doesn’t define “algorithm” beyond the function of “relay[ing] content to users.” But every piece of content on social media, whether in a chronological or recommendation-based feed, is displayed to users using an algorithm. So SB 771 will impose liability every time any piece of content is shown on social media to any user.
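To see how sweeping that is, here is a minimal, purely illustrative sketch in Python. The Post type, the sample data, and the chronological_feed function are invented for this example, not drawn from SB 771 or any real platform; the point is that even a bare reverse-chronological feed is a procedure that selects, orders, and relays user content, which appears to be all the bill’s undefined “algorithm” requires.

```python
# Illustrative sketch only: the Post type, sample data, and chronological_feed
# function are hypothetical, not taken from SB 771 or any real platform.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Relay every post to the user, newest first: no ranking, no curation."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

posts = [
    Post("alice", "hello world", datetime(2025, 9, 1, 12, 0)),
    Post("bob", "second post", datetime(2025, 9, 2, 9, 30)),
]

# Even this trivial sort is "an algorithm that relays content to users."
for post in chronological_feed(posts):
    print(f"{post.created_at:%Y-%m-%d %H:%M}  {post.author}: {post.text}")
```

The code itself is beside the point; the breadth is the point. If displaying content in any order counts as deploying an algorithm, then the bill’s liability travels with every piece of content a platform shows.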
This is where Section 230 also has something to say. One of the most consequential laws governing the internet, Section 230 states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” and prohibits states from imposing any liability inconsistent with it. In other words, the creator of the unlawful content is responsible for it, not the service they used to do so. Section 230 has been critical to the internet’s speech-enabling character. Without it, hosting the speech of others at any meaningful scale would be far too risky.
SB 771 tries to make an end-run around Section 230 by providing that “deploying an algorithm that relays content to users may be considered to be an act of the platform independent from the message of the content relayed.” In other words, California is trying to recharacterize the liability: “we’re not treating you as the publisher of that speech; we’re just holding you liable for what your algorithm does.”
But there can be no liability without the underlying content the algorithm relays. By itself, the algorithm causes no harm the law recognizes; it’s the user-generated content that causes the ostensible civil rights violation. Trying to separate the two by legislative fiat is logically incoherent. It’s reminiscent of how Texas tried to justify its content moderation law as regulating “censorship, not speech.”
You can declare that things are really other things all you want. But that doesn’t change reality or federal law.
On that note, because all social media content is relayed by algorithm, SB 771 would effectively nullify Section 230 by imposing liability for all of it. California cannot evade federal law by waving a magic wand and declaring the thing Section 230 protects to be something else.
Newsom has until October 13 to make a decision. If signed, the law takes effect on January 1, 2027, and in the interim, other states will likely follow suit. The result will be a less free internet, and less free speech — until the courts inevitably strike down SB 771 after costly, wasteful litigation. Newsom must not let it come to that. The best time to avoid violating the First Amendment is now.
The second best time is also now.
Check out FIRE's veto request, which explains these issues in greater depth.