Posted by enderworld
on March 14, 2023
Filed in Impacts
The Story
In 1995, Prodigy, an internet company of the pre-Google and Facebook era, was sued for $200 million. The complaint by Stratton Oakmont, an investment firm, was simple: Prodigy accepted user comments. It even moderated those comments and decided which ones would be published and which would be discarded. So, if Prodigy chose to publish a defamatory, scandalous comment by a user that wasn't backed by facts, then that was Prodigy's fault.
And a court in New York agreed. The ruling essentially said, "Look, if you're moderating content from your users, you're an editor and a publisher, just like a newspaper. And if a newspaper can be sued, so can you. If you don't moderate anything, then that's fine. You won't be sued."
So the folks in power in the US decided that this wasn’t great for the future of the internet.
Why’s that, you ask?
Well, the ruling meant that online platforms would simply choose not to moderate content anymore. And the internet could become a cesspool of hate and toxicity. Lawmakers wanted platforms to be able to exercise their discretion without worrying about being sued. That, they felt, was the only way the internet could work in the future.
So in 1996 they decided to tweak the law and introduce Section 230. With just 26 words, it changed the internet forever.
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Simply put, if you post something nasty about your local politician, internet platforms like Facebook or Twitter can't be held responsible for it. Only you can be sued, because you wrote the content. Platforms were off the hook. They could go about their business as usual and not worry about being sued over people's comments on their websites. All they have to say is, "Remember Section 230!"
The internet platforms of today, from Twitter and LinkedIn to even a humble restaurant review website, exist because of this law. Everyone's benefited from Section 230. You, me, the platforms.
But platforms today don't just passively host content. Their recommendation algorithms actively pick what to show you and push it your way. So, the question is: should these algorithms be protected under Section 230?
And it’s not just algorithms we have to worry about these days. We have AI-powered search engines too — Google Bard and Microsoft Bing 2.0.
See, regular search engines simply post links to content from other sources. But 'conversational' search engines are a different breed. They might take information from across the web and then summarize it in a nicely palatable way. In the search engine's own words.
Much like how at Finshots we do our research using secondary sources and write out a story for you. But we typically give you the links to these sources. We point you to the place where we got the information from.
Now an AI search engine may not do that. It might simply write out its own summary for your query. In its own words. And that may include false information. It could have defamatory language and content.
So should they be protected under Section 230?
Anyway, we don't have the answers to any of this yet. We'll have to wait until June to see what the US Supreme Court thinks. But what we can say is that the 26 words that created the internet of today won't be enough for the internet of the future.