Section 230 Is the Foundation of the Internet, So Why Do Republicans Want to Change It?


Ted Cruz has been especially critical of Section 230
Photo: Getty

Without Section 230 of the Communications Decency Act, most of your favorite websites would not survive–nor would they have existed at all. The modern internet would be a much different place without this short bit of legislation, which has become a sort of First Amendment of the web. Now a growing contingent of conservative voices is beginning to call for a reevaluation of Section 230, and we really need to consider the consequences.

All of the most frequented websites–save for Netflix–became popular, top revenue-generators in a global digital economy because they allow users to share their opinions, photos, memes, GIFs, videos, and yes, even nudes. That’s all thanks to Section 230. Its text states that, with few exceptions, websites and services like Yelp, Reddit, and Facebook cannot be held liable for the content created by their users. In the absence of Section 230, Wikipedia could never handle the liability inherent in having armies of users curate openly editable pages–not when they could be sued out of existence for something a single editor wrote.

It’s curious, then, why Section 230, long understood to be the “most important law protecting internet speech,” is now under almost constant attack by influential conservatives in Congress. Why would anyone want to dismantle the law that, at the very least, transformed the web from a network of millions into a fluid space where more than 3.2 billion people instantly share their ideas? Absent a desire to watch the thing burn, there are two possibilities. Neither is flattering.

On the one hand, it may be that some lawmakers are simply ignorant of the actual purpose behind Section 230. (Several experts on the law believe this is the case.) On the other, they could simply be looking to score some cheap political points. It’s more likely a combination of the two. For example, while grilling Mark Zuckerberg in a congressional hearing last year, Senator Ted Cruz told the Facebook co-founder: “The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum.” And while this isn’t anywhere close to being accurate, it may be precisely what the Texas lawmaker’s constituents–those who feel their opinions are currently less popular or being drowned out–want to hear.

But the truth is, Section 230 protects all websites, whether they’re politically oriented or not.

Take, for instance, the Daily Wire, a news site run by conservative commentator Ben Shapiro. A quick peek at a few articles will show that its users routinely leave disparaging comments about the subjects of its reporting. But even if a comment did cross the defamatory threshold, under Section 230, neither Shapiro nor the Daily Wire could be held liable. This is by no means, as Cruz put it, a “neutral public forum.” (Neither is this site, for that matter.) Nor is the liberal internet forum Daily Kos. It doesn’t matter: Section 230 does not place additional liability on websites that cater to a particular viewpoint, and the idea that it should is ludicrous.

The notion that website owners should, by default, face a greater risk of being sued and put out of business simply because of their political affiliations hardly seems constitutional. In fact, it seems downright dangerous. Yet this is precisely the notion being tossed around of late by seasoned Washington lawmakers, among them so-called strict interpreters of the U.S. Constitution.

For another example, look no further than Sen. Josh Hawley, a Republican of Missouri, who last year tweeted that Congress should investigate Twitter because (for reasons that remain unclear) it temporarily suspended the account of a right-wing talk-show host. In a clear reference to Section 230, he wrote: “Twitter is exempt from liability as a ‘publisher’ because it is allegedly ‘a forum for a true diversity of political discourse.'”

That’s not merely a minor misinterpretation of the law; it’s a blatant falsehood. And it’s difficult to understand how he came to that conclusion. (We’ve asked Hawley’s office to explain his comments and will update if we hear back.)

Laws are sometimes overly vague, even labyrinthine, in their text. But Section 230 is not one of them. The key provision–the clause essentially responsible for allowing all user-generated content to exist–is only 26 words long. And it says this:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

It means that if you, reader, operate an online forum, chatroom, or blog where your visitors can leave comments, post pictures, GIFs, or video, you cannot be sued into oblivion for anything your users post. If you could be, why would you let any user post anything on your site? Without this immunity, YouTube, were it even to exist at all, would more closely resemble television programming: a service, perhaps, run by a team of people who closely scrutinize every video, each produced only by creators the company had previously vetted.

Because no service could predict what outrageous things consumers might say, online reviews, such as those heavily relied upon by Amazon customers, would not be possible. And webcams? Forget about it.

CompuServe, Prodigy, and Untaming the Web

The internet didn’t always enjoy this freedom. Back in the early 1990s, on an internet with fewer than 50 million users, there were essentially two types of online services: those that took a hardline approach to free speech and made no effort to police the online forums or bulletin boards they operated, and those that wanted to moderate user content to prevent the spread of obscene material, in the hopes of creating a “family friendly” service. Unfortunately, the court decisions relied upon by judges in the ’90s to determine who should and should not be held liable for illicit content were all relics of a radically different era.

More than a half-century ago, the Supreme Court decided there’s a difference between someone who, say, publishes a book and a person who sells one. The chief difference, it found, was that the latter could not, under most circumstances, be held responsible for what someone else had written.

“There was this odd rule under the First Amendment that had come out of cases from the 1950s and the Supreme Court involving booksellers and whether a bookseller could be liable for obscene material they sold if they didn’t know of the material,” said Jeff Kosseff, a cybersecurity professor at the U.S. Naval Academy. “And what the Supreme Court said was, if you’re a distributor of content, under the First Amendment, you can only be held liable if you knew or should have known of the illegal content.”

It’s unreasonable to assume that a person who runs a bookstore or a library or a newsstand is capable of comprehending what’s contained in every piece of literature they sell. It is, however, a publisher’s job specifically to know that. Part of being a publisher is reviewing, editing, and fact-checking every book that crosses their table. These are two different types of literary distributors, but only one of them is required to intimately know their product.

Unfortunately, this method of gauging liability did not translate well to the internet, and the disparity in the outcomes of two notable cases underscored the need for a new law.

In the first case, a court found in 1991 that CompuServe, one of the internet’s first major service providers, could not be held responsible for libelous content posted by its users. Because CompuServe had no policy of reviewing its users’ content, the court found it was more analogous to a bookseller.

Conversely, a court determined four years later that Prodigy, another early provider, could be held responsible. Unlike CompuServe, Prodigy did have a policy of moderating its user content. In that way, it was more akin to a publisher. Simply because it had made an effort to eradicate anything obscene or abusive from its online forums, a gavel was dropped on Prodigy’s head.

Put differently, the civil justice system had created a financial incentive for online businesses to ignore anything illegal or defamatory posted by users. But thankfully, in 1996, there were two lawmakers in Congress who foresaw the potential of the internet to usher in a new era of economic growth, and they quickly moved to amend the law accordingly.

Authored by Sen. Ron Wyden, a Democrat, and Rep. Christopher Cox, a Republican, Section 230 was a simple and elegant solution. It was introduced as an amendment to the Communications Decency Act (CDA), the goal of which, as its title suggests, was to regulate the spread of online pornography. In his new book, The Twenty-Six Words That Created the Internet, Kosseff describes how Section 230, a law that would become pivotal to the internet’s success, was passed to little fanfare:

“The bill’s proposal and passage flew under the radar. Section 230 received virtually no opposition or media coverage, as it was folded into the more controversial Communications Decency Act, which was added to the Telecommunications Act of 1996, a sweeping overhaul of U.S. telecommunications laws. Beltway media and lobbyists focused on the regulation of long-distance carriers, local phone companies, and cable providers, believing that these companies would shape the future of communications. What most failed to anticipate was that online platforms–such as websites, social media companies, and apps–would play a far greater role in shaping the future of the Internet than would the cables and wires that physically connected computers.”

“The beauty of Section 230 is that it moots the need to inquire about why the service made the decisions that it made,” said Eric Goldman, a professor at Santa Clara University School of Law. “We’re not in a position to decide why they published something and why they chose not to publish something else. That’s a losing game. Section 230 says let’s not do that, let’s just have a categorical rule: It says third-party content, the online services aren’t liable for it, unless it fits within the statutory exceptions.”

Since the beginning, Goldman says, Section 230’s immunity has excluded federal criminal prosecutions. “This wasn’t an accident. There never was an absolute immunity from liability for third-party content.” The law also does not protect companies from liability for intellectual property infringement. And last year, Congress created another carveout with the passage of the Fight Online Sex Trafficking Act (FOSTA). A controversial amendment to Section 230 touted as a means of combating sex trafficking, FOSTA clarifies that companies can be held responsible for content advertising prostitution (even when posted by consensual sex workers).

What a handful of GOP lawmakers are aiming for today are new exceptions that would, according to experts, cripple the internet. That isn’t hyperbole.

Sen. Hawley, for instance, has repeatedly implied that changes to Section 230 should now be on the table. As Reason reported, he’s called the provision a “sweetheart deal” for Silicon Valley. “Google and Facebook should not be a law unto themselves,” he said in an interview this month at the Conservative Political Action Conference (CPAC). “They should not be able to discriminate against conservatives. They should not be able to tell conservatives to sit down and shut up.”

“The critiques of 230 usually are ill-informed,” said Goldman, who openly admits he’s a staunch supporter of the law and has a tough time seeing where it needs changing. Calls to amend the law, he added, “are often being advanced for political reasons that have nothing to do with the validity of Section 230 or its benefits for the internet.” Dismantling it entirely, he said, would be an unmitigated disaster.

“If we reconfigure Section 230,” he added, “one likely scenario is that the only third-party content that an online publisher would publish would come from professional sources that could have vetting processes for that content. They would have to stand behind that content with an indemnity or insurance.”

On this point, Kosseff agreed.

“I think if you eliminate Section 230 you will see some pretty remarkable changes. You can even see that in what happened after FOSTA was passed,” he said. “I think it was two days after it passed in the Senate, but before it was even signed into law, Craigslist eliminated its personals ads. You might say, ‘Well, that’s not necessarily a huge social problem.’ But imagine if they got rid of all of Section 230, who else would eliminate the ability for people to freely post online?”

The implicit answer is: Everyone who knows what’s good for them.

Moderating to the Extreme

Through their own mismanagement, companies like YouTube and Facebook are eliciting increased scrutiny from the media, members of the public, and lawmakers on both sides of the aisle. Extremist and terrorist propaganda is proliferating. Democracy, some argue, hangs in the balance. “You’re given this remarkable legal benefit, and if you’re not going to meet your end of the bargain, that becomes viewed as arrogant,” Kosseff says.

This was never more noticeable than when footage of the Christchurch terrorist attack in New Zealand recently spread like wildfire across multiple platforms. Calls for companies to more quickly eliminate extremist content intensified. So did the criticism of their past efforts. Propaganda is proving exceptionally difficult to tackle because, while the companies can more easily delete content they find originates with a foreign disinformation campaign, the primary sources of disinformation nowadays are domestic.

“Tech companies certainly need to continue to be far more vigorous about identifying, fingerprinting and blocking content and individuals who incite hate and violence,” Sen. Wyden said recently in a statement. “If politicians want to restrict the First Amendment or eliminate the tools with which much of the world communicates in real time, they should understand they are also taking away the tools that bear witness to government brutality, war crimes, corporate lawlessness and incidents of racial bias.”

“Banning such platforms either directly or indirectly,” he added, “will do far more harm and facilitate far more injustice than they would prevent.”

While the major social networks have taken “some steps” to counteract the spread of disinformation, a recent report from the NYU Stern Center for Business and Human Rights concluded that Facebook and, in particular, YouTube continue to employ “a piecemeal approach” to the problem. This, as opposed to any exhaustive measures that would offset or negate the deluge of disinformation from which, incidentally, they also directly profit.

Researchers found that while the companies have worked on “reducing the prominence of false content” and cobbled together enforcement of what Facebook likes to call “community standards,” they have mostly failed at, or outright avoided, embracing a straightforward commitment to removing it.

While Twitter CEO Jack Dorsey recently appeared on a podcast run by a notable purveyor of anti-vaccination theories, Facebook was forced to remove ads posted by another prominent so-called “anti-vaxxer” who had spent more than $5,000 to target, among others, women in the Washington area during a measles outbreak. A high school senior who inoculated himself testified before a Senate committee this month that his mother, who opposes vaccinations and believes they are dangerous, was getting all of her information from one source: Facebook.

After it began pulling books pushing “toxic autism cures,” Amazon this month started removing unscientific documentaries about vaccines from its streaming service. But the move wasn’t sparked by some internal revelation or safeguard; it came in reaction to a damning report by Wired.

Regardless, the companies have made some strides in improving their content-moderation policies and making those efforts more transparent, which may help blunt the momentum of calls to alter Section 230. Until just a few years ago, Kosseff noted, most of these companies treated their content-moderation efforts like “some kind of classified NSA program.”

Asked whether, if Section 230 evaporated, companies like Facebook and Wikipedia would continue to let users post their own content, Goldman exclaimed: “Why would you! If you’re liable for them, every decision that you make across 2.2 billion users, every single decision you make could be, ‘Well, you chose to publish that link and didn’t publish that one, now we get to sue you.’”

“Why would you do that?” Goldman said. “That simply doesn’t work.”
