
Section 230’s Fate Belongs With Congress—Not the US Supreme Court


In the nearly 27 years since Congress passed Section 230 of the Communications Decency Act, courts have broadly interpreted it to shield online communities from being held legally responsible for user content, laying the foundation for the business models of Facebook, Yelp, Glassdoor, Wikipedia, community bulletin boards, and so many other sites that rely on content they don’t create.

Some of those protections are at risk in the coming year, as the Supreme Court has agreed to hear its first case interpreting the scope of Section 230. In Gonzalez v. Google, the plaintiffs ask the court to rule that Section 230 does not immunize platforms when they make “targeted recommendations” of third-party content.

Section 230, written in 1995 and passed in early 1996, unsurprisingly does not explicitly mention algorithmic targeting or personalization. Yet a review of the statute’s history reveals that its proponents and authors intended the law to promote a wide range of technologies to display, filter, and prioritize user content. This means that eliminating Section 230 protections for targeted content or types of personalized technology would require Congress to change the law. 

Like many Section 230 cases, Gonzalez v. Google involves tragic circumstances. The plaintiffs are the family members and estate of Nohemi Gonzalez, a California State University student who, while studying abroad in Paris, was killed in the 2015 ISIS shootings, along with 128 other people. The lawsuit, filed against Google, alleges that its subsidiary YouTube violated the Anti-Terrorism Act by providing substantial assistance to terrorists. At the heart of the dispute is not merely that YouTube hosted ISIS videos but, as the plaintiffs wrote in legal filings, that YouTube made targeted recommendations of them. “Google selected the users to whom it would recommend ISIS videos based on what Google knew about each of the millions of YouTube viewers, targeting users whose characteristics indicated that they would be interested in ISIS videos,” the plaintiffs wrote. In other words, YouTube allegedly showed ISIS videos to those more likely to be radicalized.

Last year, the US Court of Appeals for the Ninth Circuit rejected this argument because of Section 230. Yet the court was not enthusiastic about ruling against the Gonzalez family. Despite the ruling, Judge Morgan Christen wrote for the majority, “we agree the Internet has grown into a sophisticated and powerful global engine the drafters of § 230 could not have foreseen.” Nor was the court unanimous: Judge Ronald Gould asserted that Section 230 does not immunize Google because its amplification of ISIS videos contributed to the group’s message. (Section 230 does not apply if a platform even partly takes part in the development of content.) “In short, I do not believe that Section 230 wholly immunizes a social media company’s role as a channel of communication for terrorists in their recruiting campaigns and as an intensifier of the violent and hatred-filled messages they convey,” Gould wrote. After the Ninth Circuit largely ruled against the Gonzalez family, the Supreme Court this year agreed to review the case.

Section 230 was a little-noticed part of a major 1996 overhaul of US telecommunications laws. The House added Section 230 to its telecommunications bill largely in response to two developments. First, the Senate’s version of the bill imposed penalties for the transmission of indecent content. Section 230 was touted as an alternative to the Senate’s censorious approach, and as a compromise, both the House’s Section 230 and the Senate’s anti-indecency provisions ended up in the bill that President Clinton signed into law. (The next year, the Supreme Court would rule the Senate’s portion unconstitutional.)

Second, Section 230 tried to solve a problem highlighted in a 1995 ruling in a $200 million defamation lawsuit against Prodigy, brought by a plaintiff who said that he was defamed on a Prodigy bulletin board. A New York trial court judge ruled that because Prodigy had reviewed user messages before posting, used technology that prescreened user content for “offensive language,” and engaged in other moderation, its “editorial control” rendered it a publisher that faced as much liability as the author of the posts. A few years earlier, a New York federal judge had reasoned that because CompuServe did not exert sufficient “editorial control,” it was considered a “distributor” that was liable only if it knew or had reason to know of the allegedly defamatory content.
