Facebook’s Terms of Service set jurisdiction for litigation – We Are the People, Inc. v. Facebook, Inc.

A common mistake, and arguably a waste of time, is attempting to bring breach of contract litigation in a jurisdiction other than the one the contract specifies. Years ago I wrote an article about the importance of boilerplate terms. One of the very first points I discuss is choice of law/choice of forum clauses.

Most people who are entering into a contract read the contract before they sign their name. Curiously, this doesn’t seem to translate when people are signing up for a website or app. I actually wrote about this too, warning people that they are responsible for their own actions when it comes to website Terms of Service and that they should read them before they sign up. Alas, we’re all human, and the only time people really tend to look at the Terms of Service (i.e., the use contract) is when the poo has hit the fan. Even then, the first things most people look at (or should look at if they are considering litigation) are the choice of law and choice of forum provisions.

In this instance, Plaintiffs brought a lawsuit against Facebook in the Southern District of New York alleging that Facebook’s removal of content from Plaintiffs’ Facebook pages violated Facebook’s “contractual and quasi-contractual obligations to keep Plaintiffs’ content posted indefinitely.” Anyone who has ever used Facebook would likely realize that the “contract” being discussed stems from Facebook’s Terms of Service. Facebook filed a motion to dismiss based upon Section 230 of the Communications Decency Act or, alternatively, to transfer venue.

Why would Facebook want to transfer venue? Because, arguably, California has better law for them. California has a strong anti-SLAPP law codified at Cal. Civ. Proc. Code § 425.16 (which applies to many cases in which Facebook is likely to be named), and many Section 230 cases there have been decided favorably to platforms. As such, Facebook’s Terms of Service contains a forum selection clause that requires that any disputes over the contract be heard by a court in California; more specifically, exclusively in the Northern District of California (or a state court located in San Mateo County).

As I see it, these Plaintiffs either didn’t bother to read that part of the Terms of Service or they wanted to roll the dice and see if Facebook wouldn’t notice (Pro-tip: fat chance of that working). Regardless of the rationale, on June 3, 2020 the court quickly sided with Facebook, ruling that the Terms of Service forum selection clause was “plainly mandatory” absent some showing that the clause was unenforceable (a showing Plaintiffs failed to make and, according to the Court, could not make in this particular circumstance given Defendants’ memorandum of law), and Facebook’s Motion to Transfer was granted.

Citation: We Are the People, Inc. v. Facebook, Inc., Case No. 19-CV-8871 (JMF) (S.D.N.Y. 2020)

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

It’s hard to find caselaw to support your claims when you have none – Wilson v. Twitter

When the court’s opinion is barely over a page when printed, it’s a good sign that the underlying case had little to no merit.

This was a pro se lawsuit, filed against Twitter, because Twitter suspended at least three of Plaintiff’s accounts, which were used to “insult gay, lesbian, bisexual, and transgender people,” for violating the company’s terms of service, specifically its rule against hateful conduct.

Plaintiff sued Twitter alleging that “[Twitter] suspended his accounts based on his heterosexual and Christian expressions” in violation of the First Amendment, 42 U.S.C. § 1981, Title II of the Civil Rights Act of 1964, and for alleged “legal abuse.”

The court was quick to dispose of all of the claims, explaining that:

  1. Plaintiff had no First Amendment claim against Twitter because Twitter was not a state actor; the court having to painfully explain that the mere fact that Twitter is a publicly traded company does not transform it into a state actor.
  2. Plaintiff had no claim under § 1981 because he didn’t allege racial discrimination.
  3. Plaintiff’s Civil Rights claim failed because: (1) under Title II, only injunctive relief is available (not damages like Plaintiff wanted); (2) Section 230 of the Communications Decency Act bars his claim; and (3) Title II does not prohibit discrimination on the basis of sex or sexual orientation (and no facts were asserted to support this claim).
  4. Plaintiff failed to allege any conduct by Twitter that could plausibly amount to legal abuse.

The court noted that Plaintiff “expresses his difficulty in finding case law to support his claims.” Well, I guess it would be hard to find caselaw to support claims when you have no valid ones.

Citation: Wilson v. Twitter, Civil Action No. 3:20-0054 (S.D. W.Va. 2020)

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Section 230, the First Amendment, and You.

Maybe you’ve heard about “Section 230” on the news, or through social media channels, or perhaps by reading a little about it in an article written by a major publication … but unfortunately, that doesn’t mean that the information you have received is necessarily accurate. I cannot count how many times over the last year I’ve seen what seem to be purposeful misstatements of the law … which then get repeated over and over again – perhaps to fit some sort of political agenda. After all, each side of the aisle, so to speak, is attacking the law, but curiously for different reasons. While I absolutely despise lumping people into categories, political or otherwise, the best way I can describe the ongoing debate is that the liberals believe that there is not enough censoring going on, and the conservatives think there is too much censorship going on. Meanwhile, you have the platforms hanging out in the middle, often struggling to do more with less…

In this article I will try to explain why I believe it is important that even lay people understand Section 230 and dispel some of the most common myths that continually spread throughout the Internet as gospel … even from our own Congressional representatives.

WHY LAY PEOPLE SHOULD CARE ABOUT SECTION 230

Not everyone who reads this will remember what it was like before the Internet. If you don’t, ask your elders what it was like to be “talked at” by your local television news station or newspaper. There was no real open dialog absent face-to-face or telephone communications. Your audience was limited to those you could directly share information with. Even if you wrote a “letter to the Editor” at a local newspaper, it didn’t mean that your “opinion” was necessarily going to be published. If you’re old enough to remember that, and are nodding your head in agreement … I encourage you to spend some time remembering what that was like.

If you like being able to share information freely, and to comment on information freely, you absolutely should care about what many refer to as “Section 230.” So many of my friends, family and colleagues say “I don’t understand Section 230 and I don’t care to … that’s your space,” yet these are the people that I see posting content online about their business via LinkedIn or other social media platforms, sharing reviews of businesses they have been to, looking up information on Wikimedia, sharing their general opinions, and/or otherwise engaging in dialog and debate over topics that are important to them. In a large way, whether you know it or not, Section 230 has powered your ability to interact online in this way and has drastically shaped the Internet as we know it today.

IN GENERAL: SECTION 230 EXPLAINED

The Communications Decency Act (47 U.S.C. § 230) (often referred to as “Section 230” or the “CDA”), in brief, is a federal law enacted in 1996 that, with a few exceptions carved out within the statute, protects the owners of websites/search engines/applications (each often synonymously referred to as “platforms”) from liability for third-party content.  Generally speaking, if the platform didn’t actually create the content, it traditionally isn’t liable for it. Indeed, there are a few exceptions, but for now, we’ll keep this simple. Platforms that allow third-party content are often referred to as user generated content (“UGC”) sites.  Facebook, Twitter, Snapchat, Reddit, TripAdvisor, and Yelp are all examples of such platforms, and reasonable minds would likely agree that there is social utility behind each of these sites. That said, these household-name platform “giants” aren’t the only platforms on the internet that have social utility and benefit from the CDA.  Indeed, it covers all of the smaller platforms too, including bloggers or journalists who desire to allow people to comment on articles/content on their websites.

If you’re looking for some sort of a deep dive on the history of the law, I encourage you to pick up a copy of Jeff Kosseff’s book titled The Twenty-Six Words That Created The Internet.

ONGOING “TECHLASH” WITH SECTION 230 IN THE CROSS-HAIRS

One would be entirely naive to even suggest that the Internet is perfect. If you ask me, it’s far from perfect. I readily concede that indeed there are harms that happen online. To be fair, harms happen offline too and they always have. Sometimes humans just suck. I’ve discussed a lot of this in my ongoing blog article series Fighting Fair on the Internet. What has been interesting to me is that many seem to want to blame people’s bad behavior on technology and to try and hold technology companies liable for what bad people do using their technology.

I look at technology as a tool. By analogy, a hammer is a tool yet we don’t hold the hammer manufacturing company or the store that sold the hammer to the consumer liable when a bad guy goes and beats someone to death with it. I imagine the counter-argument is that technology is in the best position to help stop the harms. Perhaps that may be true to a degree (and I believe many platforms do try to assist by moderating content and otherwise setting certain rules for their sites) but the question becomes, should they actually be liable? If you’re a Section 230 “purist” the answer is “No.” Why? Because Section 230 immunizes platforms from liability for what other people say or do on their platforms.

The government, however, seems to have its own set of ideas. We already saw an amendment to Section 230 with FOSTA (the anti-sex trafficking amendment). Unfortunately, good intentions often make for bad law, and, in my opinion, FOSTA was one of those laws, one which has arguably caused more harm than good. I could explain why, but I’ll save that discussion for another time.

Then, in February of this year, the DOJ had a “workshop” on Section 230. I was fortunate enough to be in the audience in Washington, D.C., where it was held, and recently wrote an article breaking down that “workshop.” If you’re interested in all the juicy details, feel free to read that article, but in summary it was basically four hours’ worth of: humans are bad and do bad things; technology is a tool with which bad humans do bad things; technology/platforms need to find a way to solve the bad human problem or face liability for what bad humans occasionally do with the tools they create; we want to make changes to the law even though we have no empirical evidence to support the position that this is an epidemic rather than a minority … because bad people.

Shortly thereafter, the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020 (the “EARN IT Act”) Bill was dropped, which is designed to prevent the online sexual exploitation of children. While this sounds noble (FOSTA did too), when you unpack it all and look at the bigger picture, it’s another government attempt to mess with free speech and online privacy/security in the form of yet another amendment to Section 230, under the guise of being “for the children.” I have lots of thoughts on this, but I will save them for another article another day too.

This brings us to the most recent attack on Section 230. The last two (2) weeks have been a “fun” time for those of us who care about Section 230 and its application. Remember how I mentioned above that some conservatives are of the opinion that there is too much censorship online? This often refers to the notion that social media platforms (Facebook, Twitter, and even Google) censor or otherwise block conservative speech. Setting aside whether this actually happens or not (I’ve heard arguments pointing in both directions on this issue), President Trump shined a big old light on this notion recently.

Let me start off by saying that there is a ton of misinformation that is shared online. It doesn’t help that many people in society will quickly share things without actually reading them or conducting research to see if the content they are sharing has any validity to it, but will spend 15 minutes taking a data-mining quiz only to find out what kind of a potato they are. Who knew the 2006 movie Idiocracy was going to be so prophetic?

Along with other perceived harmful content, platforms have been struggling with how to handle such misinformation. Some have considered adding more speech by way of notifications, or “labels” as Twitter calls them, to advise their users that the information may be wholly made up or modified, shared in a deceptive manner, likely to impact public safety, or otherwise likely to cause serious harm. Best I could tell, at least as far as Twitter goes, this seems to be a relatively new effort. Side note: While ideal in a perfect world, I’m not personally a fan of social media platforms fact checking because: 1) it’s very hard to be an arbiter of truth; 2) it’s incredibly hard to do it at scale; 3) once you start, people will expect you to do it on every bit of content that goes out – and that’s virtually impossible; and 4) if you fail to fact check something that turns out to be false or otherwise misleading, people might assume that such content is accurate because they have come to rely on the fact checking.

So what kicked off the latest “Section 230 tirade”? Twitter “fact checked” President Trump in two different tweets on May 26th, 2020 by adding a “label” to the bottom of the Tweets (which you have to click on to actually see – they don’t transfer when you embed them as I’ve done here) that said “Get the facts about mail-in ballots.” This clearly suggests that Twitter was in disagreement with information that the President Tweeted and likely wanted its users to be aware of alternative views.

To me, that doesn’t seem that bad. I can see some validity to President Trump’s concern. I can also see an alternative argument, especially since I typically mail in my voting ballot. If you think about it, pretty much everything that comes out of a politician’s mouth is subjective. Nevertheless, President Trump got upset over the situation, suggested that Twitter was “completely stifling FREE SPEECH,” and made veiled threats about not allowing that to happen.

If we know anything about this President, it is that when he’s annoyed with something, he will take some sort of action. President Trump ultimately ended up signing an Executive Order on “Preventing Online Censorship” a mere two (2) days later. For those who are interested, Santa Clara Law Professor Eric Goldman, while certainly left leaning and not favorable to our commander in chief, provided a great legal analysis of the Executive Order, calling it “political theater.” Even if you align yourself with the “conservative” base, I would encourage you to set aside the Professor’s personal opinions (we all have opinions) and focus on the meat of the legal argument. It’s good.

Of course, and as expected, the Internet loses its mind and all the legal scholars and practitioners come out of the woodwork, commenting on Section 230 and the newly signed Executive Order, myself included. The day after the Executive Order was signed (and likely after President Trump read all the criticisms), he Tweeted out “REVOKE 230!”

So this is where I have to sigh heavily. Indeed, there is irony in the fact that the President is calling for the revocation of the very same law that allowed innovation, and Twitter, to even become a “thing,” and which also makes it possible for him to reach out and connect to millions of people, in real time, in a pretty much unfiltered way as we’ve seen, for free, because he has the application loaded on his smartphone. In my opinion, but for Section 230, it is entirely possible Twitter, Facebook and all the other forms of social media and interactive user sites would not exist today; at least not as we know them. Additionally, I find it ironic that President Trump is making free speech arguments when he’s commenting about, and on, a private platform.

As I said though, this attack on Section 230 isn’t just stemming from the conservative side. Even Joe Biden has suggested that Section 230 should be “repealed immediately,” but he’s on the “social media companies censor too little” train, which is the complete opposite of the reason that people like President Trump want it revoked.

HOW VERY AMERICAN OF US

How many times have you heard that Americans are self-centered jerks? Well, Americans do love their Constitutional rights, especially when it comes to falling in love with their own opinions and the freedom to share those opinions. Moreover, when it comes to the whole content moderation and First Amendment debate, we often look at tech giants as purely American companies. True, these companies did develop here (arguably in part thanks to Section 230); however, what many people fail to consider is that many of these platforms operate globally. As such, they are often trying to balance the rules and regulations of the U.S. with the rules and regulations of competing global interests.

As stated, Americans are very proud of the rights granted to them, including the First Amendment right to free speech (although after reading some opinions lately I’m beginning to wonder if half the population slept through or otherwise skipped high school civics class … or worse, slept through Constitutional Law while in law school). However, not all societies have this speech right. In fact, Europe’s laws value privacy, as a right, over freedom of expression. A prime example of this playing out is Europe’s Right to Be Forgotten law.

When we demand that these tech giants cater to us here in the United States, we are forgetting that these companies have other rules and regulations that they have to take into consideration when trying to set and implement standards for their users. What is good for us here in the U.S. may not be good for the rest of the world, whose residents are also their customers.

SECTION 230 AND FIRST AMENDMENT MYTHS SPREAD LIKE WILDFIRE

What has been most frustrating to me, as someone who practices law in this area and has a lot of knowledge when it comes to the business of operating platforms, content moderation, and the applicability of Section 230, is how many people who should know better get it wrong. I’m talking about our President, Congressional representatives, and media outlets … so many of them, getting it wrong. And what happens from there? You get other people who regurgitate the same uneducated or otherwise purposeful misstatements in articles that get shared, which further perpetuates the ignorance of the law and how things actually work.

UPDATED: For example, just today Jeff Kosseff Tweeted out a thread that describes a history of the New York Times failing to accurately explain Section 230 in various articles and how one of these articles ended up being quoted by a NJ federal judge. It’s a good thread. You should read it.

MYTH: A SITE IS EITHER A “PLATFORM” OR A “PUBLISHER”

Contrary to what so many people I’ve listened to have said, or articles that I’ve read have suggested, when it comes to online UGC platforms, there is no legal distinction between “publisher” and “platform.”  You aren’t comparing the New York Times to Twitter.  Working for a newspaper is not like working for a UGC platform.  Those are entirely different business models … apples and oranges. Unfortunately, that’s another spot where many people, like this author, get caught up and confused. 

UGC platforms are not in the business of creating content themselves but rather in the business of setting their own rules and allowing third parties (i.e., you and I here on this platform) to post content in accordance with those rules.  Even for those who point to some publications erring on the side of caution around 2006-2008 regarding editing UGC comments, that doesn’t mean that’s how the law was actually interpreted.  We have decades’ worth of jurisprudence interpreting Section 230 (and interpreting the law is what the judicial branch does – not the FCC, which is an independent agency overseen by Congress).  Platforms absolutely have the right to moderate the content which they did not create and to kick people off of their platform for violation of their rules. 

Think of it this way – have you ever heard your parents say (or maybe you’ve said this to your own kids) “My house, my rules.  If you don’t like the rules, get your own house.”  If anyone actually researches the history, that’s why Section 230 was created … to remove the moderator’s dilemma.  A platform’s choice of what to allow, or disallow, has no bearing (for the sake of this argument here) on the applicability of Section 230.  Arguably, UGC platforms also have a First Amendment right to choose what they want to publish, or not publish.

MYTH: PLATFORMS HAVE TO BE NEUTRAL FOR SECTION 230 TO APPLY

Contrary to the misinformation being spewed all over (including by government representatives – which I find disappointing), Section 230 has never had a “neutrality” caveat for protection.  Moreover, in the context of the issue of political speech, Senator Ron Wyden, a co-author of the law, even stated recently on Twitter: “let me make this clear: there is nothing in the law about political neutrality.” 

You can’t get much closer to understanding the Congressional intent behind the law than getting the words directly from its co-author. 

Quite frankly, there is no such thing as a “neutral platform.”  If there were, then, as someone who deals with content escalations for platforms, I can tell you that we would have a very UGLY Internet, because sometimes people just suck.

MYTH: CENSORSHIP OF SPEECH BY A PLATFORM VIOLATES THE FIRST AMENDMENT

The First Amendment absolutely protects the freedom of speech.  In theory, you are free to put on a sandwich board that says (insert whatever you take issue with) and walk up and down the street if you want.  In fact, we’re seeing such constitutionally protected demonstrations currently with the protesters all over the country in connection with the death of George Floyd. Such peaceful demonstration is absolutely protected under the First Amendment. 

What the First Amendment does not do (and this seems to get lost on people for some reason) is give one the right to amplification of that speech on a private platform.  One might wish that were the case, but wishful thinking does not equal law. Unless and until there is some law, one that passes judicial scrutiny, which deems these private platforms a public square subject to the same restrictions that are imposed on the government, they absolutely do not have to let you say everything and anything you want. Chances are, this is also explained in their Terms of Service, which you probably didn’t read, but you should.

If you’re going to listen to anyone provide an opinion on Section 230, perhaps one would want to listen to a co-author of the law itself:

Think of it this way: if you are a bar owner and you have a drunk and disorderly guy in your bar who is clearly annoying your other customers, would you want the ability to 86 that person, or do you want the government to tell you that as long as you are open to the public you have to let that person stay in your bar, even if you risk losing other customers because someone is being obnoxious? Of course you want to be able to bounce that person out! It’s not really any different for platform operators.

LET’S KEEP THE CONVERSATION GOING BUT NOT MAKE RASH DECISIONS

Do platforms have the best of both worlds? Perhaps.  But what is worse: the way it is now with Section 230, or what it would be like without Section 230?  Frankly, I choose a world with Section 230.  Without Section 230, the Internet as we know it would change. 

While we’ve never seen what the Internet looks like without Section 230, I imagine we would go to one of two options: 1) an Internet where platforms are afraid to moderate content and therefore everything and anything would go up, leaving us with a very ugly Internet (because people are unfathomably rude and disgusting – I mean, content moderators have suffered from PTSD from having to look at what nasty humans try to share); or 2) an Internet where platforms are afraid of liability, and either UGC sites will cease to exist altogether or they may go to a notice-and-takedown model where, as soon as someone sees something they are offended by or otherwise don’t like, they will tell the platform the information is false, defamatory, harassing, etc., and that content would likely automatically come down. The Internet, and public discussion, would be at the whim of a heckler’s veto. You think speech is curtailed now? Just wait until the society of “everyone is offended” gets a hold of it.

As I mentioned to begin with, I don’t think that the Internet is perfect, but neither are humans and neither is life. While I believe there may be some concessions to be had after in-depth studies and research (after all, we’ve only got some 24 years of data to work with, and those first years really don’t count in my book), I think it foolish to be making rash decisions based upon political agendas. If the politicians want their own platform where they aren’t going to be “censored” and where people have ease of access to such information … create one! That’s what is great about this country … we have the ability to innovate … well, at least for now.

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Breaking down the DOJ Section 230 Workshop: Stuck in the Middle With You

The current debate over Section 230 of the Communications Decency Act (47 U.S.C. § 230) (often referred to as “Section 230” or the “CDA”) has many feeling a bit like the lyrics from Stealers Wheel’s “Stuck in the Middle with You,” especially the lines “clowns to the left of me, jokers to my right, here I am stuck in the middle with you.” As polarizing as the two extremes of the political spectrum seem to be these days, so are the arguments about Section 230.  Arguably, the troubling debate is compounded by politicians who either don’t understand the law or purposefully make misstatements about the law in an attempt to further their own political agenda.

For those who may not be familiar with the Communications Decency Act, in brief, it is a federal law enacted in 1996 that, with a few exceptions carved out within the statute, protects the owners of websites/search engines/applications (each often synonymously referred to as “platforms”) from liability for third-party content.  Platforms that allow third-party content are often referred to as user generated content (“UGC”) sites.  Facebook, Twitter, Snapchat, Reddit, TripAdvisor, and Yelp are all examples of such platforms, and reasonable minds would likely agree that there is social utility behind each of these sites. That said, these household-name platform “giants” aren’t the only platforms on the internet that have social utility and benefit from the CDA.  Indeed, it covers all of the smaller platforms too, including bloggers or journalists who desire to allow people to comment on articles/content on their websites. 

So, what’s the debate over?  Essentially the difficult realities about humans and technology.  I doubt there would be argument over the statement that the Internet has come a long way since the early days of CompuServe, Prodigy and AOL. I also believe that there would be little argument that humans are flawed.  Greed was prevalent and atrocities were happening long before the advent of the Internet.  Similarly, technology isn’t perfect either.  If technology were perfect from the start, we wouldn’t ever need updates … version 1.0 would be perfect, all the time, every time.  That isn’t the world that we live in though … and that’s the root of the rub, so to speak.

Since the enactment of the CDA, an abundance of lawsuits have been initiated against platforms, the results of which further defined the breadth of the law.  For those really wanting to learn more and obtain a more historical perspective on how the CDA came to be, one could read Jeff Kosseff’s book, The Twenty-Six Words That Created the Internet.  To help better understand some of the current debate over this law, which will be discussed shortly, this may be a good opportunity to point out a few of the (generally speaking) practical implications of Section 230:

  1. Unless a platform wholly creates or materially contributes to content on its platform, it will not be held liable for content created by a third party.  This immunity from liability has also been extended to other tort theories of liability where it is ultimately found that such theory stems from the third-party content.
  2. The act of filtering content by a platform does not suddenly transform it into a “publisher” aka the person that created the content in the first place, for the purposes of imposing liability.
  3. A platform will not be liable for their decision to keep content up, or take content down, regardless of whether such information may be perceived as harmful (such as content alleged to be defamatory). 
  4. Injunctive relief (such as a take down order from a court) is legally ineffective against a platform if such order relates to content that they would have immunity for.

These four general principles are the result of litigation that ensued against platforms over the past 23+ years. However, a few fairly recent high-profile cases stemming from atrocities, along with our current administration (from the President down), have put Section 230 in the crosshairs, with desires for another amendment.  The question is, an amendment for what?  One side says platforms censor too much, the other side says platforms censor too little; platforms and technology companies are being pressured to implement stronger data privacy and security for their users worldwide while the U.S. government complains that the measures being taken are too strong and therefore allegedly hinder their investigations.  Meanwhile the majority of the platforms are singing “stuck in the middle with you,” trying to do the best they can for their users with the resources they have, which, unless you’re “big Internet” or “big tech,” are typically pretty limited.  And frankly, the Mark Zuckerbergs of the world don’t speak for all platforms, because not all platforms are like Facebook nor do they have the kind of resources that Facebook has.  When it comes to implementation of new rules and regulations, resources matter.

On January 19, 2020, the United States Department of Justice announced that it would be hosting a “Workshop on Section 230 of the Communications Decency Act” on February 19, 2020 in Washington, DC.  The workshop was titled “Section 230 – Nurturing Innovation or Fostering Unaccountability?”  The stated purpose of the event was to “[D]iscuss Section 230 … its expansive interpretation by the courts, its impact on the American people and business community, and whether improvements to the law should be made.”  The title of the workshop was intriguing because it seemed to suggest that the answer was one or the other, when the two concepts are not mutually exclusive.

On February 11, 2020 the formal agenda for the workshop (the link to which has since been removed from the government’s website) was released.  The agenda outlined three separate discussion panels:

  • Panel 1: Litigating Section 230, which was to discuss the history, evolution and current application of Section 230 in private litigation;
  • Panel 2: Addressing Illicit Activity Online, which was to discuss whether Section 230 encourages or discourages platforms to address online harms, such as child exploitation, revenge porn, and terrorism, and its impact on law enforcement; and
  • Panel 3: Imagining the Alternative, which was to discuss the implications of Section 230, and proposed changes to it, on competition, investment, and speech. 

The panelists were made up of legal scholars, trade associations, and a few outside counsel who represent plaintiffs or defendants.  More specifically, the panels were filled with many of the often-empaneled Section 230 folks, including legal scholars like Eric Goldman, Jeff Kosseff, Kate Klonick, and Mary Anne Franks, as well as staunch anti-Section 230 attorney Carrie Goldberg, a victims’ rights attorney who specializes in sexual privacy violations.  Added to the mix was also Patrick Carome, who is famous for his Section 230 litigation work, defending many major platforms and organizations like Twitter, Facebook, Google, Craigslist, AirBnB, Yahoo! and the Internet Association.  Other speakers included Annie McAdams, Benjamin Zipursky, Doug Peterson, Matt Schruers, Yiota Souras, David Chavern, Neil Chilson, Pam Dixon, and Julie Samuels.

A review of the individual panelists’ bios would likely signal that the government didn’t want to include the actual stakeholders, i.e., representation from any platform’s in-house counsel or in-house policy teams.  While not discounting the value of the speakers scheduled to be on the panels, one may find it odd that those who deal with these matters every day, who represent entities that would be the most impacted by modifications to Section 230, and who would be in the best position to determine what is or is not feasible to implement in terms of changes, if changes to Section 230 were to happen, had no seat at the discussion table.  This observation was widespread … much discussion on social media about the lack of representation of the true “stakeholders” took place, with many opining that it wasn’t likely to be a fair and balanced debate and that this was nothing more than an attempt by U.S. Attorney General William Barr to gather support for the bill relating to punishing platforms/tech companies for implementing end-to-end encryption.  One could opine that the Bill really has less to do with Section 230 and more to do with the Government wanting access to data that platforms may have on a few perpetrators who happen to be using a platform/tech service.

If you aren’t clear on what is being referenced above, it bears mentioning that there is a Bill titled “Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2019,” aka the “EARN IT Act of 2019,” that was proposed by Senator Lindsey Graham.  This bill came approximately two weeks after Apple was ordered by AG Barr to unlock and decrypt the Pensacola shooter’s iPhone.  When Apple responded that it couldn’t comply with the request, the government was not happy.  An article written by the CATO Institute stated that “During a Senate Judiciary hearing on encryption in December Graham issued a warning to Facebook and Apple: ‘this time next year, if we haven’t found a way that you can live with, we will impose our will on you.’”  Given this information, and the agenda topics, the timing of the Section 230 workshop seemed a bit more than coincidental.  In fact, according to an article in Minnesota Lawyer, Professor Eric Goldman pointed out that the “DOJ is in a weird position to be convening a roundtable on a topic that isn’t in their wheelhouse.”

As odd as the whole thing may have seemed, I had the privilege of attending the Section 230 “Workshop.”  I say “workshop” because it was a straight lecture, without the opportunity for any meaningful Q&A dialog with the audience.  Speaking of the audience, of the people I had direct contact with, the audience consisted of reporters, internet/tech/First Amendment attorneys, in-house counsel/representatives from platforms, industry association representatives, individual business representatives, and law students.  The conversations that I personally had, and personally overheard, were suggestive that the UGC platform industry (the real stakeholders) were all concerned or otherwise curious about what the government was trying to do to the law that shields platforms from liability for UGC.

PANEL OVERVIEW:

After sitting through nearly four hours’ worth of lecture, and even though I felt the discussion was a bit more well-rounded than I anticipated, I still feel that the entire workshop could be summarized as follows: “humans are bad and do bad things; technology is a tool with which bad humans do bad things; technology/platforms need to find a way to solve the bad human problem or face liability for what bad humans occasionally do with the tools they create; we want to make changes to the law even though we have no empirical evidence to support the position that this is an epidemic rather than a minority … because bad people.”

Perhaps that is a bit of an oversimplification but honestly, if you watch the whole lecture, that’s what it boils down to.

The harms discussed during the different panels included:

  • Libel (brief mention)
  • Sex trafficking (Backpage.com, FOSTA, etc.)
  • Sexual exploitation of children (CSAM)
  • Revenge porn aka Non-Consensual Pornography aka Technology Facilitated Harassment
  • Sale of drugs online (brief mention)
  • Sale of alleged harmful products (brief mention)
  • Product liability theory as applied to platforms (a la Herrick v. Grindr)

PANEL #1:

In traditional fashion, the pro-Section 230 advocates explained the history of the CDA and how it is important to all platforms that allow UGC, not just “big tech,” and expounded on the social utility of the Internet … platforms large and small.  The anti-Section 230 panelists, however, pointed mainly to harms caused by platforms (though they did not elaborate on which ones) not removing sexually related content (though defamation got a short mention in the beginning). 

Ms. McAdams seemed to focus on sex trafficking – touching on how, once Backpage.com was shut down, a similar site started up in Amsterdam. She referred to the issues she was speaking about as a “public health crisis.” Of course, Ms. Goldberg raised arguments relating to the prominent Herrick v. Grindr case, wherein she argued a product liability theory as a workaround to Section 230. That case ended when the U.S. Supreme Court denied the petition for writ of certiorari in October of 2019. I’ve heard Ms. Goldberg speak on this case a few times, and one thing she continually harps on is the fact that Grindr didn’t have a way to keep Mr. Herrick’s ex from using its website. She seems surprised by this. As someone who represents platforms, it makes perfect sense to me. We must not forget that people can create multiple user profiles, from multiple devices, from multiple IP addresses, around the world. Sorry, Plaintiff attorneys … the platforms’ crystal ball is in the shop on these issues … at least for now. Don’t misunderstand me. I believe Ms. Goldberg is fighting the good fight, and her struggle on behalf of her clients is real! I admire her work, and no doubt she sees it through a lens from the trenches she is in. That said, we can’t lose sight of the reality of how things actually work versus how we’d like them to work.

PANEL #2:

There was a clear plea from Ms. Franks and Ms. Souras for something to be done about sexual images, including those exploiting children.  I am 100% in agreement that while 46 states have enacted anti-“revenge porn” (or, better termed, Non-Consensual Pornography) laws, such laws aren’t strong enough because of the malicious intent requirement.  All a perpetrator has to say is “I didn’t mean to harm the victim, I did it for entertainment” or offer another seemingly benign purpose and poof – case closed.  That struggle is difficult! 

No reasonable person thinks these kinds of things are okay, yet there seemed to be an argument that platforms don’t do enough to police and report such content.  The question becomes: why is that?  Lack of funding and resources would be my guess … either on the side of the platform OR, quite frankly, on the side of an under-funded/under-resourced government or agency that has to actually and appropriately handle what is reported.  What would be the sense of reporting unless you knew, for one, that the content was actionable, and that the agency it is being reported to would actually do something about it?

Interestingly, Ms. Souras made the comment that after FOSTA no other sites (like Backpage.com) rose up.  Curiously, that directly contradicted Ms. McAdams’s statement about the Amsterdam website popping up after Backpage.com was shut down.  So which is it?  Pro-FOSTA statements also directly contradict what I heard last October at a workshop put on by ASU’s Project Humanities entitled “Ethics and Intersectionality of the Sext Trade,” which covered the complexities of sex trafficking and sex work.  Problems with FOSTA were raised during that workshop.  Quite frankly, I see all flowery statements about FOSTA as nothing more than trying to put lipstick on a pig; trying to make a well-intentioned, emotionally driven law look like it is working when it isn’t.

Outside of the comments by Ms. Franks and Ms. Souras, AG Doug Peterson out of Nebraska did admit that the industry may self-regulate, and sometimes that happens quickly, but he still complained that Section 230’s preemption of state criminal law makes his job more difficult and advocated for an amendment adding state and territory criminal law to the list of exemptions.  While that may sound moderate, state laws can vary considerably, and arguably such an amendment would be overbroad when you are only talking about sexual images.  Further, the inclusion of Mr. Peterson almost seemed like a plug for a subtle push about how the government allegedly can’t do its job without modification of Section 230 – and I think part of what that was leaning towards, while not making a big mention of it, was the end-to-end encryption debate.  In rebuttal to this notion, Matt Schruers suggested that Section 230 doesn’t need to be amended, but that the government needs more resources so it can do a better job with the existing laws, and he encouraged tech to work to do better as it can – suggesting efforts from both sides would be helpful.

One last important point made during this panel was Kate Klonick’s distinction between the big companies and other sites that are hosting non-consensual pornography.  It is important to keep in mind that different platforms have different economic incentives and that platforms are driven by economics.  I agree with Ms. Klonick that we are in a massive “norm setting” period where we are trying to figure out what to do with these things and that we can’t look to tech to fix bad humans (although it can help).  Sometimes, to have good things, we have to accept a little bad as the trade-off.

PANEL #3

This last panel was mostly a re-cap of the benefits of Section 230 and the struggles that we face when trying to regulate with a one-size-fits-all mentality, and I think most of the panelists seemed to agree that there needs to be some research done before we go making changes, because we don’t want unintended consequences.  That is something I’ve been saying for a while and reiterated when the ABA’s Forum on Communications Law Digital Communications Committee hosted a free CLE titled “Summer School: Content Moderation 101,” wherein Jeff Kosseff and I, in a panel moderated by Elisa D’Amico, Partner at K&L Gates, discussed Section 230 and platforms’ struggles with content moderation.  Out of this whole panel, the one speaker who had most people grumbling in the audience was David Chavern, the President of the News Media Alliance.  When speaking about solutions, Mr. Chavern likened Internet platforms to traditional media, as if he were comparing two oranges, and opined that platforms should be liable just like newspapers.  Perhaps he doesn’t understand the difference between first-party content and third-party content.  The distinction between the two is huge, and therefore I found his commentary to be the least relevant and helpful to the discussion. 

SUMMARY:

In summary, there seem to be a few emotion-evoking ills in society (non-consensual pornography, exploitation of children, sex trafficking, physical attacks on victims, fraud, and the drug/opioid crisis) that the government is trying to find methods to solve.  That said, I don’t think amending Section 230 is the way to address them unless and until there is reliable and unbiased data suggesting that the cure won’t be worse than the disease. Are the ills being discussed really prevalent, or do we just think they are because they are being pushed out through information channels on a 24-hour news/information cycle?

Indeed, reasonable minds would agree that we, as a society, should try and stop harms where we can, but we also have to stop regulating based upon emotions.  We saw that with FOSTA and, arguably, it has made things more difficult for law enforcement and victims alike and has had unintended consequences, including chilling speech, for others.  You simply cannot regulate the hate out of the hearts and minds of humans, and you cannot expect technology to solve such a problem either.  Nevertheless, that seems to be the position of many of the critics of Section 230.

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.