SCOTUS declines to rule on Section 230, again – Gonzalez v. Google

The widely watched industry nail-biter of a case, Gonzalez v. Google, has been ruled upon by the Supreme Court of the United States. Many advocates of Section 230 thought for sure that SCOTUS would ruin the application of Section 230 as we know it; however, that didn’t happen. Much to the dismay of many critics of Section 230, SCOTUS (and rightfully so under the facts of this case, in my opinion) kicked the can on the issue of Section 230 and declined to address the question.

CASE SUMMARY:

In this case, the parents and brothers of Nohemi Gonzalez, a U.S. citizen killed in the 2015 coordinated terrorist attacks in Paris, sued Google, LLC under 18 U.S.C. §§2333(a) and (d)(2). They alleged that Google was directly and secondarily liable for the attack that killed Gonzalez. The secondary-liability claims were based on the assertion that Google aided and abetted and conspired with ISIS through the use of YouTube, which Google owns and operates.

The District Court dismissed the complaint for failure to state a claim but allowed the plaintiffs to amend their complaint. However, the plaintiffs chose to appeal without amending the complaint. The Ninth Circuit affirmed the dismissal of most claims, citing Section 230 of the Communications Decency Act, but allowed the claims related to Google’s approval of ISIS videos for advertisements and revenue sharing through YouTube to proceed.

The Supreme Court granted certiorari to review the Ninth Circuit’s application of Section 230. However, since the plaintiffs did not challenge the rulings on their revenue-sharing claims, and in light of the Supreme Court’s decision in Twitter, Inc. v. Taamneh, the Court found that the complaint failed to state a viable claim for relief. The Court acknowledged that the complaint appeared to fail under the standards set by Twitter and the Ninth Circuit’s unchallenged holdings. Therefore, the Court vacated the judgment and remanded the case to the Ninth Circuit for reconsideration in light of the Supreme Court’s decision in Twitter. [Author Note: If you listen to the oral argument, you’ll hear just how weak a case the Plaintiffs brought.]

In summary, the Supreme Court did not address the viability of the plaintiffs’ claims but indicated that the complaint seemed to fail to state a plausible claim for relief, and therefore, declined to address the application of Section 230 in this case. The case was remanded to the Ninth Circuit for further consideration.

DISCLAIMER & OTHER POINTS:

I’m currently sitting at the Tenth Annual Conference on Governance of Emerging Technology and Science. There is a lot of talk about AI, including ChatGPT. Because the Gonzalez opinion was so incredibly short by comparison, I thought I would test out ChatGPT’s ability to summarize this case. Having followed this case, and having read the SCOTUS opinion myself, I was quite surprised by the summary it spat out, which is what you just read above. For those who want to read the case opinion for themselves (it’s only three pages), you can review the SCOTUS opinion linked below. I’ve also included the link to the Twitter case (which is a more typical 38-page opinion). In case you are curious, I also asked ChatGPT to summarize the Twitter case; however, there is apparently some sort of character limit, as I received an error message about the request being too long. We’re all learning.

Citation: Gonzalez v. Google, 598 U.S. ___ (May 18, 2023)

Citation: Twitter v. Taamneh, 598 U.S. ___ (May 18, 2023)

DISCLAIMER: This is for general information purposes only. This should not be relied upon as formal legal advice. If you have a legal matter that you are concerned with, you should seek out an attorney in your jurisdiction who may be able to advise you of your rights and options.

Section 230 Protects Users Too – Monge v. Univ. of Pa.

One of the rarely discussed points, in the grand scheme of Section 230 chatter anyway, is the fact that Section 230 protects not only the various interactive Internet platforms but also users, just like you and me, from liability for third-party content. For example, if you’re an administrator/moderator of some random Facebook group, then, generally speaking, Section 230 protects you from legally actionable content that other users post in that group. Just like the platforms themselves, you, as a user of the platform, also get some protections. This is also true, as this case will underscore, if you share via email an article that is alleged to be defamatory. Given the ease and frequency with which people share information that they don’t necessarily read, let alone fact-check for themselves, you’d think this would be front and center in more discussions when trying to teach people that Section 230 isn’t all about protecting “big tech.”

To be clear, the fact that Section 230 protects users too isn’t just something determined by the courts through case law, but is something actually spelled out right in the language of the statute itself.

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

47 U.S.C. § 230(c)(1) (emphasis of bold and italics added)

The below information is based upon the information provided in the court opinion. I have no independent knowledge about the facts of this case.

Plaintiff: Janet Monge

Defendant: University of Pennsylvania, Deborah Thomas, et al.

HIGH LEVEL OVERVIEW

A University of Pennsylvania faculty member, Dr. Deborah Thomas, shared an article that allegedly defamed Dr. Monge via a listserv of an organization that Dr. Monge belongs to. Obviously upset about the situation, Dr. Monge filed a lawsuit asserting claims of defamation, defamation by implication, false light, and civil aiding and abetting against defendants including Dr. Thomas. Dr. Thomas filed a Fed. R. Civ. P. 12(b)(6) motion to dismiss for failure to state a claim upon which relief can be granted, arguing that 47 U.S.C. § 230(c)(1) immunizes her from liability. The court agreed with Dr. Thomas and dismissed the action with prejudice, i.e., dismissed those claims permanently.

THE LEGAL WEEDS

It is almost funny to say “legal weeds” here because this was an easy Section 230 win. The Court stated that “[c]ourts analyzing and applying the CDA have consistently held that distributing, sharing, and forwarding content created and/or developed by a third party is conduct immunized by the CDA” and then cited six cases supporting this position, relating to content that was shared in an internet chat room, via email, and through other technologies. The Court similarly dismissed Dr. Monge’s “material contribution” argument, which suggested that Dr. Thomas materially contributed to the alleged defamatory statements by including her own commentary in the email forwarding the articles. The Court’s rationale was that Dr. Thomas “did not add anything new to the articles, or materially modify them, when she shared them via email,” so she did not materially contribute to the alleged defamation. The Court again cited multiple cases supporting this point. Based upon these points, the Court rightfully concluded that “Dr. Thomas’s conduct of sharing the allegedly defamatory articles via email is immune from liability under the CDA.”

SUMMARY OF THOUGHTS

As mentioned before, this was an easy Section 230 win. Ironically, this is also one of those instances where a Plaintiff, upset about the content of something, ends up making the matter a bigger deal by filing a lawsuit; now legal academics are talking about the situation, which is a prime example of the Streisand effect. I can understand, in general, why Plaintiffs want to set the record straight when they believe false information has been put out about them. On the other hand, if you’re going to take it to court, it is important to realize that such action often shines a lot of light on an issue that you might rather have kept to a smaller audience.

Citation: Monge v. Univ. of Pa., Case No. 22-2942 (E.D. Pa. March 10, 2023)

DISCLAIMER: This is for general information purposes only. This should not be relied upon as formal legal advice. If you have a legal matter that you are concerned with, you should seek out an attorney in your jurisdiction who may be able to advise you of your rights and options.

Newsweek’s 12(b)(6) failed in defamation case – Boone v. Newsweek

I have never understood the point of news publications including images in their articles that aren’t actually related to the story or incident. This is especially true when they are reporting critically about specific individuals. I get it, images help with clicks and drawing attention, but that’s what stock photos or heavily cropped photos are for. Nevertheless, according to the statements in the Court’s opinion, Newsweek made a decision to do just that, which resulted, unsurprisingly, in a defamation and false light case against it.

The below information is based upon the information provided in the court opinion. I have no independent knowledge about the facts of this case.

Plaintiff: James Boone

Defendant: Newsweek, LLC, et al. (related entities)

HIGH LEVEL OVERVIEW

Newsweek, an online news organization, published a story about a police officer who was accused of racially profiling a man in a restaurant. Rather than posting an image of the officer actually accused of the profiling, Newsweek chose to embed a photo of a different police officer, apparently identifiable by partial face, nametag, and badge number, who had nothing to do with the headline or story. Allegedly, as a consequence, Boone and his family received texts, emails, and messages via social media inquiring about the article under the impression that he was involved, resulting in Boone having to seek police protection. Boone’s lawyer wrote to Newsweek alerting it to the issue and asking that “appropriate measures [be taken] to mitigate the harm.” For whatever reason, Newsweek apparently didn’t respond. Consequently, Boone filed a lawsuit against Newsweek for defamation and false light in the United States District Court for the Eastern District of Pennsylvania.

Newsweek then filed a motion to dismiss under Fed. R. Civ. P. 12(b)(6) [failure to state a claim], arguing that Boone failed to plead enough facts to support a reasonable inference that Newsweek acted with “actual malice.”

A LITTLE INTO THE LEGAL WEEDS

In this particular instance, Boone is considered a public figure. “To prevail on a defamation case, the First Amendment requires that public-figure plaintiffs plead, and later prove, that the defendant acted with ‘actual malice.’” Contrary to most laypersons’ belief, and as the court explains, “‘[a]ctual malice’ is a term of art that does not connote ill will or improper motivation”; rather, it means that the publisher acted “with knowledge that [the allegedly defamatory statement] was false or with reckless disregard of whether it was false or not.” Breaking it down further, “reckless disregard” means “that the defendant in fact entertained serious doubts as to the truth of the statement or that the defendant had a subjective awareness of probable falsity.”

The “actual malice” standard is a pretty high bar to recovery. It can be even higher when you’re considering, as here, not a direct false statement but alleged defamation by implication. In this instance, “[Boone] has to show that [Newsweek] either intended to communicate the defamatory meaning or knew of the defamatory meaning and was reckless in regard to it.” This inquiry is subjective in nature and requires “some evidence showing, directly or circumstantially, that the defendants themselves understood the potential defamatory meaning of their statement.” Obviously, false implications are capable of being defamatory.

Here, Boone would need to prove that “Newsweek knew the implication that Boone was involved in [the subject incident] or was reckless about that falsity” and that “Newsweek either intended to convey the false impression that Boone was involved in [the subject incident], or knew that publishing the photograph would likely convey the false impression that Boone was involved in [the subject incident] but recklessly published it anyway.” While the Court discussed some other things, it took particular issue with the fact that Boone’s badge number and nametag were visible in the photograph. The Court stated:

“The fact that the photograph depicted Boone’s nametag and badge number therefore gives rise to a reasonable inference that Newsweek (1) knew that Boone was not involved in the [subject] incident or acted in reckless disregard of that fact, and (2) knew that publishing the photograph would likely convey the false impression that Boone was involved in the [subject] incident but recklessly published it anyway.”

Because there was a “reasonable inference that Newsweek acted with actual malice” the Court denied the motion to dismiss the defamation claim. With respect to the false light claim, which also requires the finding of actual malice, the Court similarly denied the motion to dismiss.

FINAL THOUGHTS

Defamation litigation can be part of the “cost of doing business” when you are in the news publication business. That said, this was, in my opinion, easily avoidable. I’m not sure if there was a failure to have full legal review done before the story was published, or if someone didn’t take the demand letter from Boone’s attorney seriously … but Newsweek, with the limited information we’re presented with anyway, appears to have had two opportunities to avoid this litigation and took advantage of neither. The first would have been to train reporting and editing staff not to use unrelated images, especially of identifiable people, in news reporting. This should be a no-brainer, but given how often I see it happen in news publications, I’m not surprised. The second would have been to acknowledge the mistake and simply swap out the picture for something more appropriate … you know, like an image of the actual officer being accused in the article … when Newsweek received notice that there was an issue. Doubling down on something like this seems like an unnecessary risk … one that has now resulted in costly litigation. Maybe Newsweek has a huge litigation budget … but even then, you’d think they’d want to use it a little more wisely.

Citation: Boone v. Newsweek, LLC, Case No. 22-1601 (E.D. Pa. Feb. 27, 2023)

DISCLAIMER: This is for general information purposes only. This should not be relied upon as formal legal advice. If you have a legal matter that you are concerned with, you should seek out an attorney in your jurisdiction who may be able to advise you of your rights and options.

NY District Court Swings a Bat at “The Hateful Conduct Law” – Volokh v. James

This February 14th (2023), Valentine’s Day, the NY federal district court showed no love for New York’s Hateful Conduct Law when it granted a preliminary injunction to halt it. This is, to me, an exceptionally fun case because it involves not only the First Amendment (to the United States Constitution) but also Section 230 of the Communications Decency Act, 47 U.S.C. § 230. I’m also intrigued because the renowned Eugene Volokh, Locals Technology, Inc., and Rumble Canada, Inc. are the Plaintiffs. If Professor Volokh is involved, it’s likely to be an interesting argument. The information about the case below has been pulled from the Court’s opinion and various linked websites.

Plaintiffs: Eugene Volokh, Locals Technology, Inc., and Rumble Canada, Inc.

Defendant: Letitia James, in her official capacity as New York Attorney General

Case No.: 22-cv-10195 (ALC)

The Honorable Andrew L. Carter, Jr. started the opinion with the following powerful quote:

 “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’”

Matal v. Tam, 137 S.Ct. 1744, 1764 (2017) 

Before we get into what happened, it’s worth taking a moment to explain who the Plaintiffs in the case are. Eugene Volokh (“Volokh”) is a renowned First Amendment law professor at UCLA. In addition, Volokh is the co-owner and operator of the popular legal blog known as the Volokh Conspiracy. Rumble operates a website, similar to YouTube, which allows third-party independent creators to upload and share video content. Rumble sets itself apart from other similar platforms because it has a “free speech purpose” and its “mission [is] ‘to protect a free and open internet’ and to ‘create technologies that are immune to cancel culture.’” Locals Technology, Inc. (“Locals”) is a subsidiary of Rumble and also operates a website that allows third-party content to be shared among paid, and unpaid, subscribers. Similar to Rumble, Locals also reports having a “pro-free speech purpose” and a “mission of being ‘committed to fostering a community that is safe, respectful, and dedicated to the free exchange of ideas.’” Suffice it to say, the Plaintiffs are no strangers to the First Amendment or Section 230. So how did these parties become Plaintiffs? New York passed a well-intentioned, but arguably unconstitutional, law that could very well negatively impact them.

On May 14, 2022, some random racist nut job used Twitch (a social media site) to livestream himself carrying out a mass shooting on shoppers at a grocery store in Buffalo, New York. This disgusting act of violence left ten people dead and three wounded. As with most atrocities, and with what I call the “train wreck effect,” this video went viral on various other social media platforms. In response to the atrocity, New York’s Governor Kathy Hochul kicked the matter over to the Attorney General’s Office for investigation, with an apparent instruction to focus on “the specific online platforms that were used to broadcast and amplify the acts and intentions of the mass shooting,” and directed that office to investigate various online platforms for “civil or criminal liability for their role in promoting, facilitating, or providing a platform to plan or promote violence.” Apparently the Governor hasn’t heard about Section 230, but I’ll get to that in a minute. After investigation, the Attorney General’s Office released a report, and later a press release, stating that “[o]nline platforms should be held accountable for allowing hateful and dangerous content to spread on their platforms” because an alleged “lack of oversight, transparency, and accountability of these platforms allows hateful and extremist views to proliferate online.” This is where one, having any knowledge about this area of law, should insert the facepalm emoji. If you aren’t familiar with this area of law, what follows will help explain (a little – we’re trying to keep this from being a dissertation).

Now no reasonable person will disagree that this event was tragic and disgusting. Humans are weird beings and for whatever reason (though I suspect a deep dive into psychology would provide some insight), we cannot look away from a train wreck. We’re drawn to it like a moth to a flame. Just look at any news organization and what is shared. You can’t tell me that’s not filled with “train wreck” information. Don Henley said it best in his lyrics in the 1982 song Dirty Laundry, talking about the news: “she can tell you about the plane crash with a gleam in her eye” … “it’s interesting when people die, give us dirty laundry”. A Google search for the song lyrics will give you full context if you’re not a Don Henley fan … but even 40 plus years later, this is still a truth.

In an effort to combat the perceived harms from the atrocity that went viral, New York enacted the Hateful Conduct Law, entitled “Social media networks; hateful conduct prohibited,” which took effect on December 3, 2022. What in the world does that mean? Well, the law applies to “social media networks” and defines “hateful conduct” as: “[T]he use of a social media network to vilify, humiliate, incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” N.Y. Gen. Bus. Law § 394-ccc(1)(a). Okay, but still …

In explaining The Hateful Conduct Law, and as the Court’s opinion (with citations omitted) explains:

[T]he Hateful Conduct Law requires that social media networks create a complaint mechanism for three types of “conduct”: (1) conduct that vilifies; (2) conduct that humiliates; and (3) conduct that incites violence. This “conduct” falls within the law’s definition if it is aimed at an individual or group based on their “race,” “color,” “religion,” “ethnicity,” “national origin,” “disability,” “sex,” “sexual orientation,” “gender identity” or “gender expression.”

The Hateful Conduct Law has two main requirements: (1) a mechanism for social media users to file complaints about instances of “hateful conduct” and (2) disclosure of the social media network’s policy for how it will respond to any such complaints. First, the law requires a social media network to “provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct.” This mechanism must “be clearly accessible to users of such network and easily accessed from both a social media networks’ application and website. . . .” and must “allow the social media network to provide a direct response to any individual reporting hateful conduct informing them of how the matter is being handled.” N.Y. Gen. Bus. Law § 394-ccc(2).

Second, a social media network must “have a clear and concise policy readily available and accessible on their website and application. . . ” N.Y. Gen. Bus. Law § 394-ccc(3). This policy must “include how such social media network will respond and address the reports of incidents of hateful conduct on their platform.” N.Y. Gen. Bus. Law § 394-ccc(3).

The law also empowers the Attorney General to investigate violations of the law and provides for civil penalties for social media networks which “knowingly fail to comply” with the requirements. N.Y. Gen. Bus. Law § 394-ccc(5).
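To put the compliance burden in concrete terms, here is a minimal sketch, in Python using Flask, of the kind of complaint mechanism and policy disclosure the law appears to contemplate. The endpoint names, policy wording, and response format are my own hypothetical illustrations; the statute prescribes no particular implementation.

```python
# Hypothetical sketch of a § 394-ccc-style "hateful conduct" complaint
# mechanism and policy disclosure. All names and wording are invented
# for illustration only.
from flask import Flask, jsonify, request

app = Flask(__name__)
_complaints: list[dict] = []  # stand-in for real persistence


@app.route("/policy/hateful-conduct")
def policy():
    # § 394-ccc(3): a "clear and concise policy" on how reports are handled.
    return jsonify(policy="We review each report and notify the reporter "
                          "of how the matter is being handled.")


@app.post("/report/hateful-conduct")
def report():
    # § 394-ccc(2): an easily accessible reporting mechanism that allows
    # a direct response to the individual who reported the conduct.
    _complaints.append(request.get_json(force=True))
    return jsonify(ticket=len(_complaints),
                   status="Your report was received and is being reviewed.")
```

Even this toy version hints at the real costs: persistence, triage staffing, and response tracking all sit behind those two endpoints.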

Naturally this raised a lot of questions. How far reaching is this law? Who and what counts as a “social media network”? What persons or entities would be impacted? Who decides what is “hateful conduct”? Does the government have the authority to try and regulate speech in this way?

Two days before the law was to go into effect, on December 1, 2022, the instant action was commenced by the Plaintiffs, alleging both facial and as-applied challenges to the Hateful Conduct Law. Plaintiffs argued that the law “violates the First Amendment because it: (1) is a content- and viewpoint-based regulation of speech; (2) is overbroad; and (3) is void for vagueness. Plaintiffs also alleged that the law is preempted by” Section 230 of the Communications Decency Act.

For the full discussion and analysis on the First Amendment arguments, it’s best to review the full opinion, however, the Court’s opinion opened with the following summary of its position (about the First Amendment as applied to the law):

“With the well-intentioned goal of providing the public with clear policies and mechanisms to facilitate reporting hate speech on social media, the New York State legislature enacted N.Y. Gen. Bus. Law § 394-ccc (“the Hateful Conduct Law” or “the law”). Yet, the First Amendment protects from state regulation speech that may be deemed “hateful” and generally disfavors regulation of speech based on its content unless it is narrowly tailored to serve a compelling governmental interest. The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal.”

With respect to preemption, Plaintiffs argued that Section 230 of the Communications Decency Act preempts the law because the law imposes liability on websites by treating them as publishers. As the Court outlines (some case citations omitted):

The Communications Decency Act provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). The Act has an express preemption provision which states that “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. § 230(e)(3).

As compared to the section of the Opinion regarding the First Amendment, the Court gives very little analysis on the Section 230 preemption claim beyond making the following statements:

“A plain reading of the Hateful Conduct Law shows that Plaintiffs’ argument is without merit. The law imposes liability on social media networks for failing to provide a mechanism for users to complain of “hateful conduct” and for failure to disclose their policy on how they will respond to complaints. N.Y. Gen. Bus. Law § 394-ccc(5). The law does not impose liability on social media networks for failing to respond to an incident of “hateful conduct”, nor does it impose liability on the network for its users own “hateful conduct”. The law does not even require that social media networks remove instances of “hateful conduct” from their websites. Therefore, the Hateful Conduct Law does not impose liability on Plaintiffs as publishers in contravention of the Communications Decency Act.” (emphasis added)

Hold up sparkles. So the Court recognizes the fact that platforms cannot be held liable (in these instances anyway) for third-party content, no matter how ugly that content might be, yet wants to punish (in my opinion) a platform by forcing it to spend big money on development to create all these content-reporting mechanisms, and to set transparency policies, for content that it actually has no legal requirement to remove? How does this law make sense in the first place? What is the point (besides trying to trap platforms into having a policy that, if they don’t follow it, could give rise to an action for unfair or deceptive advertising)? This doesn’t encourage moderation. In fact, I’d argue that it does the opposite and encourages a website to say, “we don’t do anything about speech that someone claims to be harmful because we don’t want liability for failing to do so if we miss something.” In my mind, this is a punishment based upon third-party content. You don’t need a “reporting mechanism” for content that people aren’t likely to find offensive (like cute cat videos). To this end, I can see why Plaintiffs raised a Section 230 preemption argument … because if you drill it down, the law is still trying to force websites to take an action to deal with undesirable third-party content (and then punish them if they don’t follow whatever their policy is). In my view, it’s an attempt to do an end run around Section 230. The root issue is still undesirable third-party content. Consequently, I’m not sure I agree with the Court’s position here. I don’t think the Court drilled down far enough to the root of the issue.

Either way, the Court did, as explained at the beginning, grant Plaintiffs’ Motion for Preliminary Injunction (based upon the First Amendment arguments), which, for now, prohibits New York from trying to enforce the law.

Citation: Volokh v. James, Case No. 22-cv-10195 (ALC) (S.D.N.Y., Feb. 14, 2023)

DISCLAIMER: This is not meant to be legal advice nor should it be relied upon as such.

Pro Se’s kitchen sink approach results in a loss – Lloyd v. Facebook

The “kitchen sink approach” isn’t an uncommon claim strategy when it comes to filing lawsuits against platforms. Notwithstanding decades of precedent clearly indicating that such efforts are doomed to fail, plaintiffs still give it the ol’ college try. While this makes more sense with pro se plaintiffs, because they don’t have the same legal training and understanding of how to research case law, pro se plaintiffs aren’t the only ones who try it … no matter how many times they lose. Indeed, even some lawyers like to get paid to make losing arguments. [Insert the hands-up shrug emoji here.]

Plaintiff: Susan Lloyd

Defendants: Facebook, Inc.; Meta Platforms, Inc.; Mark Zuckerberg (collectively, “Defendants”)

In this instance, Plaintiff is a resident of Pennsylvania who suffers from “severe vision issues.” As such, she qualifies as “disabled” under the Americans with Disabilities Act (“ADA”). Ms. Lloyd, like approximately 266 million other Americans, uses the Facebook social media platform, which, as my readers likely know, is connected to, among other things, third-party advertisements.

While the full case history isn’t recited in the Court’s short opinion, it’s worthwhile to point out (it appears, anyway, with the limited record before me at this time) that Plaintiff was afforded the opportunity to amend her complaint multiple times, as the Court cites to the Third Amended Complaint (“TAC”). According to the Court’s order, the TAC alleged the following:

Plaintiff alleged problems with the platform, suggesting it is inaccessible to disabled individuals who have no arms or who have vision problems (and itemized a laundry list of issues that I won’t cite here … but suffice it to say that there was a complaint about the font size not being able to be made larger). [SIDE NOTE: For those who are unaware, website accessibility is a thing, and plaintiffs can, and will, try to hold website operators (of all types, not just big ones like Facebook) accountable if they deem there to be an accessibility issue. If you want to learn a little more, you can read the information put out on the Beebe Law website regarding ADA Website Compliance.]

Plaintiff alleged that the advertisements on Facebook were tracking her without her permission … except that users agree to Facebook’s Terms of Service (which presumably allow for that, since the court brought it up). I’m not sure at what point people will realize that if you are using something for free, you ARE the product. Indeed, there are many new privacy laws being put into place throughout various states (e.g., California, Colorado, Utah, Virginia, and Connecticut), but chances are, especially with large multi-national platforms, they are on top of the rules and are ensuring their compliance. If you aren’t checking your privacy settings, or blocking tracking pixels, etc., at some point that’s going to be on you. Technology gives folks ways to opt out – if you can locate them. I realize that sometimes these things can be hard to find, but often a search on Google will land you results – or just ask any late-teen or early-20s person. They seem to have a solid command of stuff like this these days.

Plaintiff also alleged that Defendants allowed “over 500 people to harass and bully Plaintiff on Facebook.” The allegations of threats by the other users are rather disturbing and won’t be repeated here (though you can review the case for the quotes). However, Plaintiff stated that each time she reported the harassment, she, and others, were told that it didn’t violate community standards. There is more to the story, and things allegedly escalated offline. The situation complained about, if true, is quite unsettling … and anyone with decency would be sympathetic to Plaintiff’s concerns.

[SIDE NOTE: None of this is to suggest that what happened, if true, wasn’t something that should be looked at and addressed for the future. I’m well aware that Facebook (along with other social media) has imperfect systems. Things that shouldn’t be blocked are blocked. For example, I’ve seen images of positive quotes and peanut butter cookies be blocked or covered from initial viewing as “sensitive.” On the other hand, I’ve also seen things that (subjectively speaking, but as someone who spent nearly a decade handling content moderation escalations) should be blocked, that aren’t – like clearly spammy or scammer accounts. We all know them when we see them, yet they remain even after reporting. I’ve been frustrated by the system myself … and know well both sides of that argument. Nevertheless, if one takes into account the sheer volume of posts that come in, you’d realize it’s a modern miracle that they have any system for trying to deal with such issues at all. Content moderation at scale is incredibly difficult.]

Notwithstanding the arguments offered, the court was quick to procedurally dismiss all but the breach of contract claim because those claims had already been dismissed (Plaintiff apparently re-pleaded the same causes of action). More specifically, the court dismissed the ADA and Rehabilitation Act claims because (at least under Ninth Circuit law) Facebook is not a place of public accommodation under federal law. [SIDE NOTE: there is a pretty deep split in the circuits on this point – so this isn’t necessarily a “get out of jail free” card if one is a website operator – especially if one may be availing oneself of the jurisdiction of another circuit that wouldn’t be so favorable. Again, if you’re curious about ADA Website Compliance, check out the Beebe Law website.] Similarly, Plaintiff’s Unruh Act claim failed because that act doesn’t apply to digital-only websites such as Facebook. Plaintiff’s fraud and intentional misrepresentation claims failed because there wasn’t any real proof that Facebook intended to defraud Plaintiff; only the Terms of Service were discussed. So naturally, if you can’t back up the claims, it ends up being a wasted argument. Maybe not so clear for pro se litigants, but this should be pretty clear to lawyers (still doesn’t keep them from trying). Plaintiff’s claims for invasion of privacy, negligence, and negligent infliction of emotional distress failed because they are barred by Section 230 of the Communications Decency Act, 47 U.S.C. § 230. Again, this is another one of those situations where decades of precedent contrary to a plaintiff’s position isn’t a deterrent from trying to advance such claims anyway. Lastly, the claims against Zuckerberg were dismissed because Plaintiff didn’t allege that he was personally involved in or directed the challenged acts (i.e., he isn’t an “alter ego”).

This left the breach of contract claim. Defendants argued that Plaintiff’s claim for breach of contract should be dismissed because the Court lacks diversity jurisdiction over the claim, as she cannot meet the amount in controversy. As the Court explains, “28 U.S.C. § 1332 grants federal courts original jurisdiction over civil actions where the amount in controversy exceeds $75,000 and the parties are citizens of different states.” Indeed, the parties are from different states; however, the requirement that the amount in controversy exceed $75,000 is where Plaintiff met an impossible hurdle. As discussed above, users of Facebook all agree to Facebook’s Terms of Service. Here, Plaintiff’s claim for breach of contract is based on the conduct of third-party users, and Facebook’s Terms of Service disclaim all liability for third-party conduct. Further, the TOS also provide that “aggregate liability arising out of . . . the [TOS] will not exceed the greater of $100 or the amount Plaintiff has paid Meta in the past twelve months.” Facebook, having been around the block a time or two with litigation, has definitely refined its TOS over the years to make it nearly impenetrable. I mean, never say never, BUT … good luck. Lastly, the TOS preclude damages for “lost profits, revenues, information, or data, or consequential, special, indirect, exemplary, punitive, or incidental damages.” Based upon all of these provisions, there is no legal way that Plaintiff could meet the required amount in controversy of more than $75,000. The Court dismissed the final remaining claim, breach of contract, without leave to amend, although it did add that “[t]he Court expresses no opinion on whether Plaintiff may pursue her contract claim in state court.” One might construe that as a sympathetic signal to the Plaintiff (or other future Plaintiffs) …
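For the technically inclined, the jurisdictional math reduces to a few lines. Here is a minimal sketch in Python; the $0 figure for what a free-tier user paid Meta is my assumption, not a fact from the opinion:

```python
# Sketch of the diversity-jurisdiction math described above.
# The amount a typical free-tier user paid Meta is an assumption (zero).
DIVERSITY_THRESHOLD = 75_000  # 28 U.S.C. § 1332: claim must EXCEED this

amount_paid_to_meta = 0  # assumed: Facebook is free for ordinary users
tos_cap = max(100, amount_paid_to_meta)  # "greater of $100 or the amount paid"

# The capped recovery can never clear the threshold, so no federal jurisdiction.
print(tos_cap > DIVERSITY_THRESHOLD)  # False
```

However you shuffle the assumed numbers, a $100 cap cannot exceed $75,000, which is why the hurdle was impossible.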

There are a few takeaways from this case, in my opinion:

  1. Throwing garden variety kitchen sink claims at platforms, especially ones the size of Facebook, is likely to be a waste of ink on paper on top of the time it takes to even put the ink on the paper in the first place. If you have concerns about issues with a platform, engage the services of an Internet lawyer in your area that understands all of these things.
  2. Properly drafted, and accepted, Terms of Service for your website can be a huge shield from liability. This is why copying and pasting from some random site, or using a “one-size-fits-all” free form from one of those “do-it-yourself” sites, is penny-wise and pound-foolish. Just hire a darn Internet lawyer to help you if you’re operating a business website. It can save you money and headache in the long run – an investment in the future of your company, if you will.
  3. Website Accessibility, and related claims, is a thing! You don’t hear a lot about it because the matters don’t typically make it to court. Many of these cases settle based upon demand letters for thousands of dollars and costly remediation work … so don’t think that it can’t happen to you (if you’re operating a website for your business).

Citation: Lloyd v. Facebook, Inc., Case No. 21-cv-10075-EMC (N.D. Cal. Feb. 7, 2023)

DISCLAIMER: This is for general information only. This is not legal advice nor should it be relied upon as such. If you have concerns regarding your own specific situation, be sure to reach out to an attorney in your jurisdiction who may be able to advise you of your rights.

GoDaddy not liable for third party snagging previously owned domain – Rigsby v. GoDaddy, Inc.

This case presents a cautionary tale of why you want to ensure auto-renewal is on, and that each renewal actually goes through, for any website domain you plan on using long term. Failing to renew timely (or to verify that the renewal actually happened) can have unintended, frustrating consequences.

Plaintiffs-Appellants: Scott Rigsby and Scott Rigsby Foundation, Inc. (together “Rigsby”).

Defendants-Appellees: GoDaddy, Inc., GoDaddy.com, LLC, GoDaddy Operating Company, LLC, and Desert Newco, LLC (together “GoDaddy”).

Scott Rigsby is a physically challenged athlete and motivational speaker who started the Scott Rigsby Foundation. In 2007, in connection with the foundation, he registered the domain name “scottrigsbyfoundation.org” with GoDaddy.com. Unfortunately, allegedly as a result of a glitch in GoDaddy’s billing system, Rigsby failed to pay the annual renewal fee in 2018. In these instances, the domain typically becomes free for anyone to purchase, and that is exactly what happened: a third party registered the then-available domain name and turned it into a gambling information site. Naturally, this is a very frustrating situation for Rigsby.
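As a practical aside, a lapse like this is detectable before the domain drops. Here is a minimal sketch of one way to monitor a domain’s expiration date, assuming the third-party python-whois package (WHOIS response formats vary by registrar, so real monitoring would need more robust parsing and alerting):

```python
# Sketch: warn when a domain is close to expiring, so a billing glitch
# doesn't silently cost you the domain. Assumes the third-party
# "python-whois" package (pip install python-whois).
import datetime

import whois  # third-party: python-whois


def days_until_expiry(domain: str) -> int:
    record = whois.whois(domain)
    expires = record.expiration_date
    if isinstance(expires, list):  # some registrars return several dates
        expires = min(expires)
    if expires is None:
        raise ValueError(f"No expiration date found for {domain}")
    return (expires - datetime.datetime.now()).days


if __name__ == "__main__":
    remaining = days_until_expiry("example.org")
    if remaining < 30:
        print(f"Only {remaining} days left -- verify the renewal went through!")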

Rigsby then decided to sue GoDaddy for violations of the Lanham Act, 15 U.S.C. § 1125(a) (which for my non-legal industry readers is the primary federal trademark statute in the United States) and various state laws and sought declaratory and injunctive relief including return of the domain name.

This legal strategy is most curious to me because they didn’t name the third party that actually purchased the domain and made use of it. For those who are unaware, “use in commerce” by the would-be trademark infringer is a requirement of the Lanham Act, and it seems like a pretty long leap to suggest that GoDaddy was the party in this situation that made use of the subject domain.

Rigsby also faced another hurdle: GoDaddy has immunity under the Anticybersquatting Consumer Protection Act, 15 U.S.C. § 1125(d) (“ACPA”). The ACPA limits the secondary liability of domain name registrars and registries for the act of registering a domain name. Rigsby would be hard pressed to show that GoDaddy registered, used, or trafficked in his domain name with a bad faith intent to profit. Similarly, Rigsby would be hard pressed to show that GoDaddy’s alleged wrongful conduct surpassed mere registration activity.

Lastly, Rigsby faced a hurdle when it comes to Section 230 of the Communications Decency Act, 47 U.S.C. § 230. I’ve written about Section 230 many times in my blogs, but in general, Section 230 provides immunity to websites/platforms from claims stemming from content created by third parties. To be sure, there are some exceptions, including for intellectual property law claims, see 47 U.S.C. § 230(e)(2), but there wasn’t an act done by GoDaddy that would sit squarely within the Lanham Act such that GoDaddy would have liability, so that exception doesn’t apply. Additionally, 47 U.S.C. § 230(e)(3) preempts inconsistent state law claims. Put another way, with a few exceptions, a platform will also avoid liability under various state law claims. As such, Section 230 would shield GoDaddy from liability for Rigsby’s state-law claims for invasion of privacy, publicity, trade libel, libel, and violations of Arizona’s Consumer Fraud Act. These are garden-variety tort claims that plaintiffs will typically assert in these kinds of instances; however, plaintiffs have to be careful that the claims are directed at the right party … and it’s fairly rare that a platform is going to be the right party in these situations.

The District of Arizona dismissed all of the claims against GoDaddy, and Rigsby appealed the dismissal to the Ninth Circuit Court of Appeals. While sympathetic to Rigsby’s plight, the court correctly concluded, on February 3, 2023, that Rigsby was barking up the wrong tree in terms of who was named as a defendant, and it appropriately affirmed the dismissal of the claims against GoDaddy.

To read the court’s full opinion which goes into greater detail about the facts of this case, click on the citation below.

Citation: Rigsby v. GoDaddy, Inc., Case No. 21-16182 (9th Cir. Feb. 3, 2023)

DISCLAIMER: This is for general information only. None of this is meant to be legal advice nor should it be relied upon as such.

Section 230 doesn’t protect against a UGC platform’s own unlawful conduct – Fed. Trade Comm’n v. Roomster Corp.

This seems like a no-brainer to anyone who understands Section 230 of the Communications Decency Act, but for some reason that hasn’t stopped defendants from making the tried-and-failed argument that Section 230 protects a platform from liability for its own unlawful conduct.

Plaintiffs: Federal Trade Commission, State of California, State of Colorado, State of Florida, State of Illinois, Commonwealth of Massachusetts, and State of New York

Defendants: Roomster Corporation, John Shriber, individually and as an officer of Roomster, and Roman Zaks, individually and as an officer of Roomster.

Roomster (roomster.com) is an internet-based (desktop and mobile app) room and roommate finder platform that purports to be an intermediary (i.e., the middle man) between individuals who are seeking rentals, sublets, and roommates. For anyone that has been around for a minute in this industry, you might be feeling like we’ve got a little bit of a Roommates.com legal situation going on here, but it’s different. Roomster, like many platforms that allow third-party content, also known as User Generated Content (“UGC”) platforms, does not verify listings or ensure that the listings are real or authentic, and it has allegedly allowed postings to go up where the address of the listing was a U.S. Post Office. Now, this might seem out of the ordinary to an everyday person reading this, but I can assure you, it’s nearly impossible for any UGC platform to police every listing, especially a small company with any reasonable volume of traffic, and moderation only becomes harder as the platform grows. That’s just the truth of operating a UGC platform.

Notwithstanding these fake posting issues, Plaintiffs allege that Defendants have falsely represented that properties listed on the Roomster platform are real, available, and verified. [OUCH!] They further allege that Defendants have created or purchased thousands of fake positive reviews to support these representations and placed fake rental listings on the Internet to drive traffic to their platform. [DOUBLE OUCH!] If true, Roomster may be in for a ride.

The FTC has alleged that Defendants’ acts or practices violate Section 5(a) of the FTC Act, 15 U.S.C. § 45(a) (which in layman’s terms is the federal law against unfair or deceptive business practices), and the states have alleged violations of the various state versions of deceptive acts and practices laws. At this point, based on the alleged facts, that seems about right to me.

Roomster filed a Motion to Dismiss pursuant to Rule 12(b)(6) for Plaintiffs’ alleged failure to state a claim, for various reasons that I won’t discuss here (you can read about them in the case), but it also argued that “even if Plaintiffs may bring their claims, Defendants cannot be held liable for injuries stemming from user-generated listings and reviews because … they are interactive computer service providers and so are immune from liability for inaccuracies in user-supplied content, pursuant to Section 230 of the Communications Decency Act, 47 U.S.C. § 230.” Where is the facepalm emoji when you need it? Frankly, that’s a “hail mary” and a total waste of an argument … because Section 230 does not immunize a defendant from liability for its own unlawful conduct. Indeed, a platform can be held liable for offensive content on its service or system if it contributes to the development of what makes the content unlawful. This is also true where a platform has engaged in deceptive practices or has had direct participation in a deceptive scheme. Fortunately, like many courts before it, the court in this case saw through the crap and rightfully denied the Motion to Dismiss on this (and other) points.

I smell a settlement in the air, but only time will tell.

Case Citation: Fed. Trade Comm’n v. Roomster Corp., Case No. 22 Civ. 7389 (S.D.N.Y. Feb. 1, 2023)

DISCLAIMER: This is for general information only. None of this is meant to be legal advice nor should it be relied upon as such.

Anti-SLAPP Laws Without a Mandatory Award of Fees and Costs Are a Hindrance to Access to Justice and Chill Free Speech

Arizona recently passed a new anti-SLAPP law, 2022 Ariz. HB 2722 (it’s not in effect yet and won’t be for a few months at least). A colleague of mine and I are working on a more comprehensive discussion of anti-SLAPP laws generally and this new law specifically (which I will link here once done; you can always follow me here or on various social media to get the latest). As I was writing the initial draft of that article this week, I became more and more frustrated. Anti-SLAPP laws without a mandatory award of attorneys’ fees and costs to the party prevailing on such a motion are a hindrance to access to justice for real victims of SLAPP suits and chill free speech. How? Let me elaborate.

I should preface this with the fact that I spent the better part of a decade working as in-house counsel for an interactive online forum, and I’ve pretty much seen it all when it comes to true victims sharing their honest stories (and being threatened because of it) and bad actors using the Internet as a source of revenge (where people are desperate to make the harassment stop and to remove untruthful, hurtful information from the platform). As such, my opinion comes through a lens of having heard countless stories from all sides.

Generally speaking (obviously there are always outliers), those who lawfully criticize wrongdoers, especially online, do so because they don’t have the means to file suit over the experience that led to the criticism. Complaining online is their remedy. If those being criticized are powerful and/or wealthy, it’s really easy to say, “Take that content down or I’ll sue you.” Many Americans are living paycheck to paycheck, but even if they are comfortably above that, they often cannot afford to be sued. Just look at how long it took to get through the Depp/Heard case. Granted, that was a case where two parties were heavily pushing back on who was right … but this is not unlike many civil cases. In fact, the behavior exhibited in that courtroom, on display for the watching world to see, is not all that unusual for litigating parties. The only difference there is that it was televised, and people care enough about celebrity dirt to watch the case unfold on live television/online streaming.

But if you aren’t a celebrity or a wealthy individual … if you cannot afford to fight back through expensive lawyers, even when you’re in the right … what do you do? Chances are you begrudgingly remove the content to save your own pocketbook, or worse, lose a legal action and end up with a judgment, albeit by default, against you because, for whatever reason (and there are many), you don’t appear in the case. Ahh, yes … the threat of a SLAPP suit is indeed a huge and powerful sword.

But what happens if you cannot remove the content because the website’s terms of service prohibit it, or the posting has been scraped and put up elsewhere such that you do not have control over it? Oh yes, this happens all the time online. People don’t read Terms of Service, and unfortunately, copycat websites scrape content that isn’t theirs. In this instance, chances are you will get sued anyway. Why? Because it’s worth it for the wealthy/powerful to try to get a court order to remove the content from the internet, and they can’t do that without a suit. After all, many platforms will honor court orders for content removal even if they are obtained by default.

And in a lot of ways, this makes sense. Especially when bad actors/defamers hide behind anonymous accounts and/or are in foreign countries that make pursuing the perpetrator cost prohibitive or near impossible for real victims. Real victims need relief and this is one such pathway to remedy. On the other hand, for the truth tellers, it can be hard to stand up to wealthy/powerful bad actors when faced with a lawsuit. Those who speak up honestly can get the short end of the stick. If a suit is filed, and they can’t afford to defend against it, are they to be victimized yet again by default? I know it happens. I’ve seen it happen. Let me give you an example.

Imagine with me for a moment that you are the owner of a new start-up company called Cool Business, LLC operating in Arizona, and you want to engage the services of an advertising company. Your friend, Tim, gives you the name of Great Advertising Co. based out of New York. A New York advertising company sounds fancy, and you think they will probably do a far better job than anyone here in little Arizona, so you reach out to them. The conversation goes great. They send you a basic contract to sign for the work to be done for Cool Business, LLC and require a $6,000.00 deposit so they can get started, plus another $4,000.00 in 90 days, for a total contract of $10,000.00 over three months. You skim the agreement, gloss over the headings of the boilerplate terms (because they’re all the same, right?), sign it, and send them the $6,000.00. Everything goes great at first, but months into the relationship, and dozens of calls later, you realize that Great Advertising Co. is flaky. They aren’t delivering the services on time, and there is always an excuse for why the work isn’t done, but when the 90 days hit, they still ask for their additional $4,000.00 pursuant to the contract. The business relationship at this point has soured. Great Advertising Co. demands its additional $4,000.00 under the contract, which you refuse to pay, and you instead demand a refund of your $6,000.00. Great Advertising Co. refuses. Pissed off, you take your story to your favorite business attorney in Arizona. She reviews your contract and advises you that while you may have a breach of contract claim, the terms of your contract say that you agree to litigate any matters stemming from the agreement in a court in New York, and because the contract is with Cool Business, LLC, you’d have to hire a lawyer in the state of New York to handle the matter, because businesses have to be represented by a lawyer in the court you’d have to file in. Knowing that New York lawyers can be very expensive, you decide it’s not worth the hassle and cut your losses. Understandably upset, however, you take to the Internet to tell everyone you know how, truthfully, Great Advertising Co. ripped you off, and you explain in detail what happened. You post your reviews to Google, Yelp, Facebook, and any other place you can find to help spread the word about these unscrupulous business tactics, and you leave it at that. Ten months later, you receive a letter from Great Advertising Co.’s New York lawyer telling you that you technically still owe the $4,000.00 under the contract, that Great Advertising Co. doesn’t appreciate the negative reviews, and demanding that you immediately remove them or Great Advertising Co. will file a lawsuit against you for defamation. You ignore the letter because you know you have a good breach of contract case and the First Amendment on your side, because what you said was 100% the truth, and you know, after talking to your favorite defamation attorney a few years back, that the truth is a defense to a claim of defamation. A day before the one-year anniversary of your pissed-off-customer online posting tirade, you are served with a complaint, out of New York, for defamation. You’ve watched the Johnny Depp and Amber Heard defamation trial. You saw how long that case dragged out, and you know that you don’t have the funds to pay an attorney to fight for your rights in New York. You didn’t even have the funds to hire a New York attorney to bring a breach of contract case against Great Advertising Co. to try to get your $6,000.00 back. As such, feeling defeated, and without talking to your favorite defamation attorney again, you just ignore the complaint. You figure, what’s the worst that can happen? Great Advertising Co. obtains a default judgment against you individually, with an order to take down the content, and the judge awards $2,500.00 in damages.

Now, this entire hypothetical, while obviously the facts have been changed and such, is based on a true story of what one individual experienced and how these types of situations can go south in a hurry. There are countless similar stories out there. Good folks are victimized not just once, by the initial acts, but twice in some instances, like in this hypothetical. But this is where good anti-SLAPP laws come into play.

Anti-SLAPP laws are designed to fight back against those who file lawsuits just to try to silence their critics, but without the promise of attorneys’ fees and costs for the work, victims of little means are hard pressed to find lawyers willing to help (hence the hindrance to access to justice). The sad truth is that most lawyers (like most professionals) cannot afford to work for free – being a professional is expensive, and it’s not getting any cheaper. When anti-SLAPP laws include such fee provisions, it’s a lot easier for attorneys to consider taking on a SLAPP case, with low or no money down, because they know they will get paid when they win. This of course presumes it’s a deep pocket that filed the SLAPP in the first place, because the reality is that a judgment is only worth one’s ability to collect.

When anti-SLAPP laws fail to include such provisions, there is little deterrent to filing a SLAPP suit. Yes, if the little person being picked on has means, maybe the filer will think twice, but that’s not often the case, and SLAPP filers know, and bank on, the litigation causing financial hardship or stress so that the truth teller will simply give in to the demands to remove the content before even answering the complaint, thus chilling truthful speech. It’s a powerful tactic. If it wasn’t, there wouldn’t be so many states with anti-SLAPP laws trying to curb such problems in the first place.

As many legal practitioners are painfully aware, it can be very difficult to get a judge to award attorney fees and costs absent a statutory requirement. So even if you fight against a SLAPP suit, and win, you could still be out tens of thousands of dollars (or more depending on the case) with no guarantee of recovery. As an attorney, when you have to tell potential clients this, you can see the defeat in people's faces before you even get going. It's scary. What average person has tens of thousands of dollars lying around to pay a lawyer to fight for their First Amendment right to free speech?

Would those odds make you excited about standing up for yourself? I think not. If you knew all this, would you be so willing to share with the public honest information about bad actors and your personal experience? I think not.

And this doesn't just go for complaining consumers, but also for investigative journalists. If you think a random, but bigger, company going after an unhappy customer who got ripped off and complained about it is bad … imagine what a powerful elite will try to do to an investigative journalist trying to uncover some very serious dirty laundry and expose it to the world.

Bottom line, for any anti-SLAPP law to be a true shield, among other things, it must contain, at minimum, a statutory award of attorney fees and costs.

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

#firstamendment #defamation #antiSLAPP #legislation #accesstojustice

California Assembly Bill 1687, designed to protect against age discrimination, gets tagged by Ninth Circuit on First Amendment grounds: IMDb.com, Inc. v. Becerra

On June 19, 2020 the Ninth Circuit Court of Appeals ruled that the content-based restrictions on speech contained within California's Assembly Bill 1687 were facially unconstitutional because the law "does not survive First Amendment scrutiny."

I feel like if you live outside of glamorous places like California, New York and Florida, you may not be paying attention to laws being pushed by organizations like the Screen Actors Guild, aka "SAG." Nevertheless, I try to keep my ear to the ground for cases that involve the First Amendment and Section 230 of the Communications Decency Act. This case happens to raise both issues, although only the First Amendment matter is addressed here.

For those that may be unfamiliar, IMDb.com is the Internet Movie Database, a free public website that includes information about movies, television shows, and video games. It also contains information on actors and crew members in the industry, which may include the subject's age or date of birth. This is an incredibly popular site, the court opinion noting that as of January 2017 "it ranked 54th most visited website in the world." The information on the site is generated by users (just like you and me), but IMDb does employ a "Database Content Team tasked with reviewing the community's additions and revisions for accuracy."

Outside of the "free" user generated section, IMDb also introduced, back in 2002, a subscription-based service called "IMDbPro" for industry professionals (actors/crew and recruiters), essentially a LinkedIn for Hollywood: it provides space for professionals to upload resume-type information, headshots, etc., and casting agents can search the database for talent.

Back in 2016, apparently at SAG's urging, California enacted regulation (Assembly Bill 1687) that arguably targeted IMDb in an effort to curtail alleged age discrimination in the entertainment industry. No doubt a legitimate concern (as it is in many industries); however, good intentions often result in bad law.

AB 1687 was signed into law, codified at Cal. Civ. Code § 1798.83.5 and included the following provision:

A commercial online entertainment employment service provider that enters into a contractual agreement to provide employment services to an individual for a subscription payment shall not, upon request by the subscriber, do either of the following: (1) [p]ublish or make public the subscriber’s date of birth or age information in an online profile of the subscriber [or] (2) [s]hare the subscriber’s date of birth or age information with any Internet Web sites for the purpose of publication.

Cal. Civ. Code § 1798.83.5(b)(1)-(2)

The statute also provides, in pertinent part:

A commercial online entertainment employment service provider subject to subdivision (b) shall, within five days, remove from public view in an online profile of the subscriber the subscriber’s date of birth and age information on any companion Internet Web sites under its control upon specific request by the subscriber naming the Internet Web sites.

Cal. Civ. Code § 1798.83.5(c)

The practical effect of these provisions is that subscribers of IMDbPro can request that IMDb remove the subscriber's age or date of birth from the subscriber's profile (which I would think IMDb could do on its own to the extent it has control over such profile data) AND, more problematically, from anywhere else on its website where such information exists, regardless of who created that content. This extends to content the IMDbPro subscribers may not have control over, as it may have been generated by third-party users of the site.

The Court opinion explained that "[b]efore AB 1687 took effect, IMDb filed a complaint under 42 U.S.C § 1983 in the Northern District of California to prevent its enforcement. IMDb alleged that AB 1687 violated both the First Amendment and Commerce Clause of the Constitution, as well as the Communications Decency Act, 47 U.S.C. § 230(f)(2)." While there was much back and forth between the parties, the crux of the debate, and crucial for the appeal, was the language prohibiting IMDb from publishing age information without regard to the source of that information.

When considering the statutory language restricting what could be posted, the Court of Appeals concluded:

  • AB 1687 implemented a content-based restriction on speech (i.e., on the dissemination of date of birth or age information) that is subject to First Amendment scrutiny.
  • AB 1687 did not present a situation where reduced protection would apply (e.g., where the speech at issue is balanced against a social interest in order and morality).
    • IMDb's content did not constitute commercial speech.
    • IMDb's content did not facilitate illegal conduct.
    • IMDb's content did not implicate privacy concerns.
  • AB 1687 did not survive strict scrutiny because it was not the least restrictive means of accomplishing the goal, nor was it narrowly tailored.

In conclusion the Court articulated a position that I wholly agree with: “Unlawful age discrimination has no place in the entertainment industry, or any other industry. But not all statutory means of ending such discrimination are constitutional.”

Citation: IMDb.com, Inc. v. Becerra, Case Nos. 18-15463, 18-15469 (9th Cir. 2020)

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Facebook’s Terms of Service set jurisdiction for litigation – We Are the People, Inc. v. Facebook, Inc.

A common mistake, and arguably a waste of time, is to attempt to bring breach of contract litigation in a jurisdiction other than the one the contract specifies. Years ago I wrote an article about the importance of boilerplate terms. One of the very first points I discuss is choice of law/choice of forum clauses.

Most people who are entering into a contract read the contract before they sign their name. Curiously, this doesn’t seem to translate when people are signing up for a website or app. I actually wrote about this too, warning people that they are responsible for their own actions when it comes to website Terms of Service and that they should read them before they sign up. Alas, we’re all human and the only real time people tend to look at the Terms of Service (i.e., the use contract) is when the poo has hit the fan. Even then, the first thing most people look at (or should look at if they are considering litigation) is the choice of law provisions.

In this instance, Plaintiffs brought a lawsuit against Facebook in the Southern District of New York alleging that Facebook's removal of content from Plaintiffs' Facebook pages violated Facebook's "contractual and quasi-contractual obligations to keep Plaintiffs' content posted indefinitely." Anyone who has ever used Facebook would likely realize that the "contract" being discussed would stem from its Terms of Service. Facebook filed a motion to dismiss based upon Section 230 of the Communications Decency Act or, alternatively, to transfer venue.

Why would Facebook want to transfer venue? Because arguably California has better law for it. California has a strong anti-SLAPP law codified at Cal. Code Civ. Proc. § 425.16 (which applies to many cases in which Facebook is likely to be named), and many Section 230 cases there have been decided favorably to platforms. As such, Facebook's Terms of Service contains a forum selection clause that requires any disputes over the contract be heard by a court in California; more specifically, exclusively in the Northern District of California (or a state court located in San Mateo County).

As I see it, these Plaintiffs either didn't bother to read that part of the Terms of Service or they wanted to roll the dice and see if Facebook wouldn't notice (Pro-tip: fat chance of that working). Regardless of the rationale, on June 3, 2020 the court quickly sided with Facebook, ruling that the Terms of Service forum selection clause was "plainly mandatory" absent some showing that such clause was unenforceable (which Plaintiffs failed to make and, according to the Court, could not make in this particular circumstance given Defendants' memorandum of law), and Facebook's Motion to Transfer was granted.

Citation: We Are the People, Inc. v. Facebook, Inc., Case No. 19-CV-8871 (JMF) (S.D. N.Y. 2020)

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

It’s hard to find caselaw to support your claims when you have none – Wilson v. Twitter

When the court’s opinion is barely over a page when printed, it’s a good sign that the underlying case had little to no merit.

This was a pro se lawsuit, filed against Twitter after Twitter suspended at least three of Plaintiff's accounts, which were used to "insult gay, lesbian, bisexual, and transgender people," for violating the company's terms of service, specifically its rule against hateful conduct.

Plaintiff sued Twitter alleging that “[Twitter] suspended his accounts based on his heterosexual and Christian expressions” in violation of the First Amendment, 42 U.S.C. § 1981, Title II of the Civil Rights Act of 1964, and for alleged “legal abuse.”

The court was quick to deny all of the claims explaining that:

  1. Plaintiff had no First Amendment claim against Twitter because Twitter was not a state actor; the court having to painfully explain that just because Twitter is a publicly traded company doesn't transform it into a state actor.
  2. Plaintiff had no claim under § 1981 because he didn't allege racial discrimination.
  3. Plaintiff's Civil Rights claim failed because: (1) under Title II, only injunctive relief is available (not damages like Plaintiff wanted); (2) Section 230 of the Communications Decency Act bars his claim; and (3) Title II does not prohibit discrimination on the basis of sex or sexual orientation (and no facts were asserted to support this claim).
  4. Plaintiff failed to allege any conduct by Twitter that could plausibly amount to legal abuse.

The court noted that Plaintiff “expresses his difficulty in finding case law to support his claims.” Well, I guess it would be hard to find caselaw to support claims when you have no valid ones.

Citation: Wilson v. Twitter, Civil Action No. 3:20-0054 (S.D. W.Va. 2020)

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Section 230, the First Amendment, and You.

Maybe you've heard about "Section 230" on the news, or through social media channels, or perhaps by reading a little about it through an article written by a major publication … but unfortunately, that doesn't mean that the information you have received is necessarily accurate. I cannot count how many times over the last year I've seen what seem to be purposeful misstatements of the law … which then get repeated over and over again, perhaps to fit some sort of political agenda. After all, each side of the aisle, so to speak, is attacking the law, but curiously for different reasons. While I absolutely despise lumping people into categories, political or otherwise, the best way I can describe the ongoing debate is that the liberals believe that there is not enough censoring going on, and the conservatives think there is too much censorship going on. Meanwhile, you have the platforms hanging out in the middle often struggling to do more, with less…

In this article I will try to explain why I believe it is important that even lay people understand Section 230 and dispel some of the most common myths that continually spread throughout the Internet as gospel … even from our own Congressional representatives.

WHY LAY PEOPLE SHOULD CARE ABOUT SECTION 230

Not everyone who reads this will remember what it was like before the Internet. If you don't, ask your elders what it was like to be "talked at" by your local television news station or newspaper. There was no real open dialog absent face-to-face or telephone communications. You were limited in who you could share information with. Even if you wrote a "letter to the Editor" at a local newspaper, it didn't mean that your "opinion" was necessarily going to be published. If you wanted to share a picture, you had to actually use a camera and film, take it to a developer, wait two weeks, pay for the developing and pray that your pictures didn't suck. Can't tell you how many blurry photographs I have in a shoe box somewhere. Then you had to mail them, hand them out, or show your friends in person. And don't even get me started about a phone that was stuck to the wall, where your "privacy" was limited to having a long phone cord that might stretch into the bathroom so you could shut the door. If you're old enough to remember that, and are nodding your head in agreement … I encourage you to spend some time remembering what that was like. It seems that we non-digital natives are at a point in life where we take the technology we have for granted; and the digital natives (meaning they were born with all of this technology) don't really know the struggles of life without it.

If you like being able to share information freely, and to comment on information freely, you absolutely should care about what many refer to as "Section 230." So many of my friends, family and colleagues say "I don't understand Section 230 and I don't care to … that's your space," yet these are the people I see posting content online about their business via LinkedIn or other social media platforms, sharing reviews of businesses they have been to, looking up information on Wikimedia, sharing their general opinions and otherwise engaging in dialog and debate over topics that are important to them, etc. In a large way, whether you know it or not, Section 230 has powered your ability to interact online in this way and has drastically shaped the Internet as we know it today.

IN GENERAL: SECTION 230 EXPLAINED

The Communications Decency Act (47 U.S.C. § 230) (often referred to as "Section 230" or the "CDA" or even "CDA 230"), in brief, is a federal law enacted in 1996 that, with a few exceptions carved out within the statute, protects the owners of websites/search engines/applications (each often synonymously referred to as "platforms") from liability for third-party content. Generally speaking, if the platform didn't actually create the content, it typically isn't liable for it. Indeed, there are a few exceptions, but for now, we'll keep this simple. Platforms that allow interactive third-party content are often referred to as user generated content ("UGC") sites. Facebook, Twitter, Snapchat, Reddit, Tripadvisor, and Yelp are all examples of such platforms, and reasonable minds would likely agree that there is social utility behind each of these sites. That said, these household-recognized platform "giants" aren't the only platforms on the internet that have social utility and benefit from the CDA. Indeed, it covers all of the smaller platforms, including bloggers or journalists who desire to allow people to comment about articles/content on their websites. Suffice it to say, there are WAY more little guys than there are big guys, or "Big Tech" as some refer to it.

If you’re looking for some sort of a deep dive on the history of the law, I encourage you to pick up a copy of Jeff Kosseff’s book titled The Twenty-Six Words That Created The Internet. It’s a great read!

ONGOING “TECHLASH” WITH SECTION 230 IN THE CROSS-HAIRS

One would be entirely naïve to suggest that the Internet is perfect. If you ask me, it’s far from perfect. I readily concede that indeed there are harms that happen online. To be fair, harms happen offline too and they always have. Sometimes humans just suck. I’ve discussed a lot of this in my ongoing blog article series Fighting Fair on the Internet. What has been interesting to me is that many seem to want to blame people’s bad behavior on technology and to try and hold technology companies liable for what bad people do using their technology.

I look at technology as a tool. By analogy, a hammer is a tool yet we don’t hold the hammer manufacturing company or the store that sold the hammer to the consumer liable when a bad guy goes and beats someone to death with it. I imagine the counter-argument is that technology is in the best position to help stop the harms. Perhaps that may be true to a degree (and I believe many platforms do try to assist by moderating content and otherwise setting certain rules for their sites) but the question becomes, should they actually be liable? If you’re a Section 230 “purist” the answer is “No.” Why? Because Section 230 immunizes platforms from liability for the content that other people say or do on their platforms. Platforms are still liable for the content they choose to create and post or otherwise materially contribute to (but even that is getting into the weeds a little bit).

The government, however, seems to have its own set of ideas. We already saw an amendment to Section 230 with FOSTA (the anti-sex trafficking amendment). Unfortunately, good intentions often make for bad law, and, in my opinion, FOSTA was one of those laws, having arguably been proven to cause more harm than good. I could explain why, but I'll save that discussion for another time.

Then, in February of 2020, the DOJ had a "workshop" on Section 230. I was fortunate enough to be in the audience in Washington, D.C., where it was held, and recently wrote an article breaking down that "workshop." If you're interested in all the juicy details, feel free to read that article, but in summary it basically was four hours' worth of: humans are bad and do bad things; technology is a tool with which bad humans do bad things; technology/platforms need to find a way to solve the bad human problem or face liability for what bad humans occasionally do with the tools they create; we want to make changes to the law even though we have no empirical evidence to support the position that this is an epidemic rather than a minority … because bad people.

Shortly thereafter the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020, or EARN IT Act, was dropped, a bill designed to prevent the online sexual exploitation of children. While this sounds noble (FOSTA did too), when you unpack it all and look at the bigger picture, it's more government attempting to mess with free speech and online privacy/security in the form of yet another amendment to Section 230 under the guise of being "for the children." I have lots of thoughts on this, but I will save them for another article another day too.

This brings us to the most recent attack on Section 230. The last two (2) weeks have been a “fun” time for those of us who care about Section 230 and its application. Remember how I mentioned above that some conservatives are of the opinion that there is too much censorship online? This often refers to the notion that social media platforms (Facebook, Twitter, and even Google) censor or otherwise block conservative speech. Setting aside whether this actually happens or not (I’ve heard arguments pointing both directions on this issue) President Trump shined a big light on this notion.

Let me first start off by saying that there is a ton of misinformation shared online. It doesn't help that many people in society will quickly share things without actually reading them or conducting research to see if the content they are sharing has any validity, but will spend 15 minutes taking a data mining quiz only to find out what kind of a potato they are. As a side note, I made that up in jest and then later found out that there really is a quiz to find out what kind of potato you are. Who knew the 2006 movie Idiocracy was going to be so prophetic? Although, I can't really say this is just something that happens online. Anyone who ever survived junior high and high school knows that gossip is often riddled with misinformation, and somehow we seem to forget about the silliness that happens offline too. The Internet, however, has just given the gossipers a megaphone … to the world.

Along with other perceived harmful content, platforms have been struggling with how to handle such misinformation. Some have considered adding more speech by way of notifications, or "labels" as Twitter calls them, to advise their users that the information may be wholly made up or modified, shared in a deceptive manner, likely to impact public safety or otherwise cause serious harm. Best I could tell, at least as far as Twitter goes, this seems to be a relatively new effort. Other platforms like Facebook have apparently resorted to taking people's accounts down, putting odd cover-ups over photos, etc. on content they deem "unworthy" for their users. Side note: While ideal in a perfect world, I'm not personally a fan of social media platforms fact checking because: 1) it's very hard to be an arbiter of truth; 2) it's incredibly hard to do it at scale; 3) once you start, people will expect you to do it on every bit of content that goes out, and that's virtually impossible; and 4) if you fail to fact check something that turns out to be false or otherwise misleading, people might assume that such content is accurate because they have come to rely on the fact checking. And who checks the fact checkers? Not that my personal opinion matters, but I think this is where the bigger tech companies have created more problems for themselves (and arguably for all the little sites that rely on Section 230 to operate without fear of liability).

So what kicked off the latest "Section 230 tirade"? Twitter "fact checked" President Trump in two different tweets on May 26, 2020 by adding a "label" to the bottom of the Tweets (which you have to click on to actually see; they don't transfer when you embed them as I've done here) that said "Get the facts about mail-in ballots." This clearly suggests that Twitter was in disagreement with information that the President Tweeted and likely wanted its users to be aware of alternative views.

https://twitter.com/realDonaldTrump/status/1265255845358645254?s=20

To me, that doesn’t seem that bad. I can absolutely see some validity to President Trump’s concern. I can also see an alternative argument, especially since I typically mail in my voting ballot. Either way, adding content in this way, versus taking it down altogether, seems like the route that provides people more information to consider for themselves, not less. In any event, if you think about it, pretty much everything that comes out of a politician’s mouth is subjective. Nevertheless, President Trump got upset over the situation and then suggested that Twitter was “completely stifling FREE SPEECH” and then made veiled threats about not allowing that to happen.

https://twitter.com/realDonaldTrump/status/1265427539008380928?s=20

If we know anything about this President, it is that when he’s annoyed with something, he will take some sort of action. President Trump ultimately ended up signing an Executive Order on “Preventing Online Censorship” a mere two (2) days later. For those that are interested, while certainly left leaning, and non-favorable to our commander in chief, Santa Clara Law Professor Eric Goldman provided a great legal analysis of the Executive Order, calling it “political theater.” Even if you align yourself with the “conservative” base, I would encourage you to set aside the Professor’s personal opinions (we all have opinions) and focus on the meat of the legal argument. It’s good.

Of course, and as expected, the Internet loses its mind and all the legal scholars and practitioners come out of the woodwork, commenting on Section 230 and the newly signed Executive Order, myself included. The day after the Executive Order was signed (and likely after President Trump read all the criticisms) he Tweeted out "REVOKE 230!"

https://twitter.com/realDonaldTrump/status/1266387743996870656?s=20

So this is where I have to sigh heavily. Indeed there is irony in the fact that the President is calling for the revocation of the very same law that allowed innovation and Twitter to even become a “thing” and which also makes it possible for him to reach out and connect to millions of people, in real time, in a pretty much unfiltered way as we’ve seen, for free because he has the application loaded on his smart phone. In my opinion, but for Section 230, it is entirely possible Twitter, Facebook and all the other forms of social media and interactive user sites would not exist today; at least not as we know it. Additionally, I find it ironic that President Trump is making free speech arguments when he’s commenting about, and on, a private platform. For those of you that slept through high school civics, the First Amendment doesn’t apply to private companies … more about that later.

As I said though, this attack on Section 230 isn't just stemming from the conservative side. Even Joe Biden has suggested that Section 230 should be "repealed immediately," but he's on the whole "social media companies censor too little" train, which is the complete opposite of the reason that people like President Trump want it revoked.

HOW VERY AMERICAN OF US

How many times have you heard that Americans are self-centered jerks? Well, Americans do love their Constitutional rights, especially when it comes to falling in love with their own opinions and the freedom to share those opinions. Moreover, when it comes to the whole content moderation and First Amendment debate, we often look at tech giants as purely American companies. True, these companies did develop here (arguably in large part thanks to Section 230); however, what many people fail to consider is that many of these platforms operate globally. As such, they are often trying to balance the rules and regulations of the U.S. with the rules and regulations of competing global interests.

As stated, Americans are very proud of the rights granted to them, including the First Amendment right to free speech (although after reading some opinions lately I'm beginning to wonder if half the population slept through or otherwise skipped high school civics class … or worse, slept through Constitutional Law while in law school). However, not all societies have this speech right. In fact, Europe's laws value privacy, as a right, over freedom of expression. A prime example of this playing out is Europe's Right to Be Forgotten law. If you aren't familiar, under this EU law citizens can ask that even truthful, but perhaps older, information be taken down from the Internet (or, in some cases, not be indexed on EU search engines), or else the company hosting that information can face penalties.

When we demand that these tech giants cater to us here in the United States, we forget that these companies have other rules and regulations to take into consideration when trying to set and implement standards for their users. What is good for us here in the U.S. may not be good for the rest of the world, who are also their customers.

SECTION 230 AND FIRST AMENDMENT MYTHS SPREAD LIKE WILDFIRE

What has been most frustrating to me, as someone who practices law in this area and has a lot of knowledge when it comes to the business of operating platforms, content moderation, and the applicability of Section 230, is how many people who should know better get it wrong. I'm talking about our President, Congressional representatives, and media outlets … so many of them, getting it wrong. And what happens from there? You get other people who regurgitate the same uneducated or otherwise purposeful misstatements in articles that get shared, which further perpetuates the ignorance of the law and how things actually work.

For example, just today (June 8, 2020) Jeff Kosseff Tweeted out a thread that describes a history of the New York Times failing to accurately explain Section 230 in various articles and how one of these articles ended up being quoted by a NJ federal judge. It’s a good thread. You should read it.

MYTH: A SITE IS EITHER A “PLATFORM” OR A “PUBLISHER”

Contrary to so many people I've listened to speak, or articles that I've read, when it comes to online UGC platforms, there is no distinction between "publisher" and "platform." You aren't comparing the New York Times to Twitter. Working for a newspaper is not like working for a UGC platform. Those are entirely different business models … apples and oranges. Unfortunately, this is another spot where many people get caught up and confused.

UGC platforms are not in the business of creating content themselves but rather in the business of setting their own rules and allowing third parties (i.e., you and I here on this platform) to post content in accordance with those rules. Even though some publications erred on the side of caution around 2006-2008 when it came to editing UGC comments, that doesn't mean that's how the law was actually interpreted. We have decades' worth of jurisprudence interpreting Section 230 (which is what the judicial branch does: interprets the law, not the FCC, which is an independent agency overseen by Congress). UPDATE 1/5/2021: there is now debate on whether the FCC can interpret it, and as of October 21, 2020 the FCC seems to think it has such a right. Platforms absolutely have the right to moderate content they did not create and to kick people off of their platform for violating their rules.

Think of it this way: have you ever heard your parents say (or maybe you've said this to your own kids) "My house, my rules. If you don't like the rules, get your own house."? If anyone actually researches the history, that's why Section 230 was created … to remove the moderator's dilemma. A platform's choice of what to allow, or disallow, has no bearing (for the sake of this argument here) on the applicability of Section 230. Arguably, UGC platforms also have a First Amendment right to choose what they want to publish, or not publish. So even without Section 230, they could still get rid of content they didn't deem appropriate for their users/mission/business model.

MYTH: PLATFORMS HAVE TO BE NEUTRAL FOR SECTION 230 TO APPLY

Contrary to the misinformation being spewed all over (including by government representatives, which I find disappointing), Section 230 has never had a "neutrality" caveat for protection. Moreover, in the context of political speech, Senator Ron Wyden, a co-author of the law, even stated recently on Twitter: "let me make this clear: there is nothing in the law about political neutrality."

You can't get much closer to understanding the Congressional intent behind a law than getting the words directly from its co-author.

Quite frankly, there is no such thing as a "neutral platform." That's like saying a cheeseburger is a cheeseburger is a cheeseburger. Respectfully, some cheeseburgers from certain restaurants are just way better than others. Moreover, if platforms were limited to removing only unlawful content, i.e., a common carrier approach where the platforms would be forced to treat all legal content equally and refrain from discrimination, then, as someone who deals with content escalations for platforms, I can tell you that we would have a very UGLY Internet, because sometimes people just suck, or their idea of a good time and funny isn't exactly age-appropriate for all viewers/users.

MYTH: CENSORSHIP OF SPEECH BY A PLATFORM VIOLATES THE FIRST AMENDMENT

The First Amendment absolutely protects the freedom of speech. In theory, you are free to put on a sandwich board that says (insert whatever you take issue with) and walk up and down the street if you want. In fact, we're seeing such constitutionally protected demonstrations currently with the protesters all over the country in connection with the death of George Floyd. Peaceful demonstration (and yes, I agree, not all of it was "peaceful") is absolutely protected under the First Amendment.

What the First Amendment does not do (and this seems to get lost on people for some reason) is give one the right to amplification of that speech on a private platform. One might wish that were the case, but wishful thinking does not equal law. Unless and until there is some law, one that passes judicial scrutiny, which deems these private platforms a public square subject to the same restrictions that are imposed on the government, they absolutely do not have to let you say everything and anything you want. Chances are, this is also explained in their Terms of Service, which you probably didn't read, but you should.

If you’re going to listen to anyone provide an opinion on Section 230, perhaps one would want to listen to a co-author of the law itself:

Think of it this way: if you are a bar owner and you have a drunk and disorderly guy in your bar who is clearly annoying your other customers, would you want the ability to 86 the person, or do you want the government to tell you that as long as you are open to the public you have to let that person stay in your bar, even if you risk losing other customers because someone is being obnoxious? Of course you want to be able to bounce that person out! It's not really any different for platform operators.

So for all of you chanting about how a platform's censorship of your speech on its platform is impacting your freedom of speech: you don't understand the plain language of the First Amendment. The law is "Congress shall make no law … abridging the freedom of speech…", not "any person or entity shall make no rule abridging the freedom of speech…", which is what people seem to think the First Amendment says or otherwise want the law to say.

LET’S KEEP THE CONVERSATION GOING BUT NOT MAKE RASH DECISIONS

Do platforms have the best of both worlds? Perhaps. But what is worse: the way it is now with Section 230, or what it would be like without Section 230? Frankly, I choose a world with Section 230. Without Section 230, the Internet as we know it will change.

While we've never seen what the Internet looks like without Section 230, I imagine we would go to one of two options: 1) an Internet where platforms are afraid to moderate content and therefore everything and anything would go up, leaving us with a very ugly Internet (because people are unfathomably rude and disgusting; I mean, content moderators have suffered from PTSD from having to look at what nasty humans try to share); or 2) an Internet where platforms are afraid of liability, and either UGC sites will cease to exist altogether or they will go to a notice-and-takedown model where, as soon as someone sees something they are offended by or otherwise don't like, they will tell the platform the information is false, defamatory, harassing, etc., and that content would likely automatically come down. The Internet, and public discussion, would be at the whim of a heckler's veto. You think speech is curtailed now? Just wait until the society of "everyone is offended" gets a hold of it.

As I mentioned to begin with, I don’t think that the Internet is perfect, but neither are humans and neither is life. While I believe there may be some concessions to be had, after in-depth studies and research (after all, we’ve only got some 24 years of data to work with and those first years really don’t count in my book) I think it foolish to be making rash decisions based upon political agendas. If the politicians want their own platform where they aren’t going to be “censored” and the people have ease of access to such information … create one! If people don’t like that platforms like Twitter, Facebook, or Google are censoring content … don’t use them or use them less. Spend your time and money with a platform that more aligns with your desires and beliefs. There isn’t one out there? Well, nothing is stopping you from creating your own version (albeit, I understand that it’s easier said than done … but there are platforms out there trying to make that move). That’s what is great about this country … we have the ability to innovate … we have options … well, at least for now.

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Breaking down the DOJ Section 230 Workshop: Stuck in the Middle With You

The current debate over Section 230 of the Communications Decency Act (47 U.S.C. § 230) (often referred to as "Section 230" or the "CDA") has many feeling a bit like the lyrics from Stealers Wheel's "Stuck in the Middle with You," especially the lines that say "clowns to the left of me, jokers to my right, here I am stuck in the middle with you." As polarizing as the two extremes of the political spectrum seem to be these days, so are the arguments about Section 230. Arguably the troubling debate is compounded by politicians who either don't understand the law, or purposefully make misstatements about the law in an attempt to further their own political agendas.

For those who may not be familiar with the Communications Decency Act: in brief, it is a federal law enacted in 1996 that, with a few exceptions carved out within the statute, protects the owners of websites/search engines/applications (each often synonymously referred to as "platforms") from liability for third-party content. Platforms that allow third-party content are often referred to as user generated content ("UGC") sites. Facebook, Twitter, Snapchat, Reddit, TripAdvisor, and Yelp are all examples of such platforms, and reasonable minds would likely agree that there is social utility behind each of these sites. That said, these household-recognized platform "giants" aren't the only platforms on the internet that have social utility and benefit from the CDA. Indeed, it covers all of the smaller platforms, including bloggers or journalists who desire to allow people to comment about articles/content on their websites.

So, what’s the debate over?  Essentially the difficult realities about humans and technology.  I doubt there would be argument over the statement that the Internet has come a long way since the early days of CompuServe, Prodigy and AOL. I also believe that there would be little argument that humans are flawed.  Greed was prevalent and atrocities were happening long before the advent of the Internet.  Similarly, technology isn’t perfect either.  If technology were perfect from the start, we wouldn’t ever need updates … version 1.0 would be perfect, all the time, every time.  That isn’t the world that we live in though … and that’s the root of the rub, so to speak.

Since the enactment of the CDA, an abundance of lawsuits have been initiated against platforms, the results of which further defined the breadth of the law. For those really wanting to learn more and obtain a more historical perspective on how the CDA came to be, one could read Jeff Kosseff's book, The Twenty-Six Words That Created the Internet. To help better understand some of the current debate over this law, which will be discussed shortly, this may be a good opportunity to point out a few of the (generally speaking) practical implications of Section 230:

  1. Unless a platform wholly creates or materially contributes to content on its platform, it will not be held liable for the content created by a third party. This immunity from liability has also been extended to other tort theories of liability where it is ultimately found that such theory stems from the third-party content.
  2. The act of filtering content by a platform does not suddenly transform it into a "publisher" (i.e., the person that created the content in the first place) for the purposes of imposing liability.
  3. A platform will not be liable for its decision to keep content up, or take content down, regardless of whether such information may be perceived as harmful (such as content alleged to be defamatory).
  4. Injunctive relief (such as a takedown order from a court) is legally ineffective against a platform if such order relates to content for which the platform would have immunity.

These four general principles are the result of litigation that ensued against platforms over the past 23+ years. However, a few fairly recent high-profile cases stemming from atrocities, and our current administration (from the President down), have put Section 230 in the crosshairs, with desires for another amendment. The question is, an amendment for what? One side says platforms censor too much; the other side says platforms censor too little. Platforms and technology companies are being pressured to implement stronger data privacy and security for their users worldwide, while the U.S. government complains that the measures being taken are too strong and therefore allegedly hinder its investigations. Meanwhile the majority of the platforms are singing "stuck in the middle with you," trying to do the best they can for their users with the resources they have, which, unless you're "big Internet" or "big tech," are typically pretty limited. And frankly, the Mark Zuckerbergs of the world don't speak for all platforms, because not all platforms are like Facebook, nor do they have the kind of resources that Facebook has. When it comes to implementation of new rules and regulations, resources matter.

On January 19, 2020 the United States Department of Justice announced that it would be hosting a "Workshop on Section 230 of the Communications Decency Act" on February 19, 2020 in Washington, DC. The title of the workshop: "Section 230 – Nurturing Innovation or Fostering Unaccountability?" The stated purpose of the event was to "[D]iscuss Section 230 … its expansive interpretation by the courts, its impact on the American people and business community, and whether improvements to the law should be made." The title of the workshop was intriguing because it seemed to suggest that the answer was one or the other, when the two concepts are not mutually exclusive.

On February 11, 2020 the formal agenda for the workshop (the link to which has since been removed from the government’s website) was released.  The agenda outlined three separate discussion panels:

  • Panel 1:  Litigating Section 230 which was to discuss the history, evolution and current application of Section 230 in private litigation;
  • Panel 2: Addressing Illicit Activity Online which was to discuss whether Section 230 encourages or discourages platforms to address online harms, such as child exploitation, revenge porn, and terrorism, and its impact on law enforcement; and
  • Panel 3: Imagining the Alternative which was to discuss the implications on competition, investment, and speech of Section 230 and proposed changes. 

The panelists were made up of legal scholars, trade associations, and a few outside counsel who represent plaintiffs or defendants. More specifically, the panels were filled with many of the often-empaneled Section 230 folks, including legal scholars like Eric Goldman, Jeff Kosseff, Kate Klonick, and Mary Anne Franks, plus staunch anti-Section 230 attorney Carrie Goldberg, a victims' rights attorney who specializes in sexual privacy violations. Added to the mix was Patrick Carome, who is famous for his Section 230 litigation work defending many major platforms and organizations like Twitter, Facebook, Google, Craigslist, Airbnb, Yahoo! and the Internet Association. Other speakers included Annie McAdams, Benjamin Zipursky, Doug Peterson, Matt Schruers, Yiota Souras, David Chavern, Neil Chilson, Pam Dixon, and Julie Samuels.

A review of the individual panelists' bios would likely signal that the government didn't want to include the actual stakeholders, i.e., representation from any platform's in-house counsel or in-house policy team. While not discounting the value of the speakers scheduled to be on the panels, one may find it odd that those who deal with these matters every day, who represent entities that would be the most impacted by modifications to Section 230, and who would be in the best position to determine what is or is not feasible to implement if changes to Section 230 were to happen, had no seat at the discussion table. This observation was widespread … much discussion took place on social media about the lack of representation of the true "stakeholders," with many opining that it wasn't likely to be a fair and balanced debate and that this was nothing more than an attempt by U.S. Attorney General William Barr to gather support for the bill that would punish platforms/tech companies for implementing end-to-end encryption. One could opine that the Bill really has less to do with Section 230 and more to do with the Government wanting access to data that platforms may have on a few perpetrators who happen to be using a platform/tech service.

If you aren't clear on what is being referenced above, it bears mentioning that there is a Bill titled the "Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2019," aka the "EARN IT Act of 2019," proposed by Senator Lindsey Graham. This bill came approximately two weeks after Apple was ordered by AG Barr to unlock and decrypt the Pensacola shooter's iPhone. When Apple responded that it couldn't comply with the request, the government was not happy. An article written by the CATO Institute stated that "During a Senate Judiciary hearing on encryption in December Graham issued a warning to Facebook and Apple: 'this time next year, if we haven't found a way that you can live with, we will impose our will on you.'" Given this information, and the agenda topics, the timing of the Section 230 workshop seemed like a bit more than coincidence. In fact, according to an article in Minnesota Lawyer, Professor Eric Goldman pointed out that the "DOJ is in a weird position to be convening a roundtable on a topic that isn't in their wheelhouse."

As odd as the whole thing may have seemed, I had the privilege of attending the Section 230 "Workshop." I say "workshop" because it was a straight lecture, without the opportunity for any meaningful Q&A dialog with the audience. Speaking of the audience, among the people I had direct contact with, it consisted of reporters; internet, tech, and First Amendment attorneys; in-house counsel and representatives from platforms; industry association representatives; individual business representatives; and law students. The conversations that I personally had, and personally overheard, suggested that the UGC platform industry (the real stakeholders) was concerned or otherwise curious about what the government was trying to do to the law that shields platforms from liability for UGC.

PANEL OVERVIEW:

After sitting through nearly four hours’ worth of lecture, and even though I felt the discussion to be a bit more well-rounded than I anticipated, I still feel that the entire workshop could be summarized as follows: “humans are bad and do bad things; technology is a tool in which bad humans do bad things; technology/platforms need to find a way to solve the bad human problem or face liability for what bad humans occasionally do with the tools they create; we want to make changes to the law even though we have no empirical evidence to support the position that this is an epidemic rather than a minority…because bad people.”

Perhaps that is a bit of an oversimplification but honestly, if you watch the whole lecture, that’s what it boils down to.

The harms discussed during the different panels included:

  • Libel (brief mention)
  • Sex trafficking (Backpage.com, FOSTA, etc.)
  • Sexual exploitation of children (CSAM)
  • Revenge porn aka Non-Consensual Pornography aka Technology Facilitated Harassment
  • Sale of drugs online (brief mention)
  • Sale of alleged harmful products (brief mention)
  • Product liability theory as applied to platforms (a la Herrick v. Grindr)

PANEL #1:

In traditional fashion, the pro-Section 230 advocates explained the history of the CDA and how it is important to all platforms that allow UGC, not just "big tech," and expounded on the social utility of the Internet … platforms large and small. However, the anti-Section 230 panelists pointed mainly to harms allegedly caused by platforms (though they did not elaborate on which ones) failing to remove sexually related content (though defamation got a short mention in the beginning).

Ms. McAdams seemed to focus on sex trafficking, touching on how, once Backpage.com was shut down, a similar site started up in Amsterdam. She referred to the issues she was speaking about as a "public health crisis." Of course, Ms. Goldberg raised arguments relating to the prominent Herrick v. Grindr case, wherein she argued a product liability theory as a workaround to Section 230. That case ended when the writ was denied by the U.S. Supreme Court in October of 2019. I've heard Ms. Goldberg speak on this case a few times, and one thing she continually harps on is the fact that Grindr didn't have a way to keep Mr. Herrick's ex from using its website. She seems surprised by this. As someone who represents platforms, it makes perfect sense to me. We must not forget that people can create multiple user profiles, from multiple devices, from multiple IP addresses, around the world. Sorry, Plaintiff attorneys … the platforms' crystal ball is in the shop on these issues, at least for now. Don't misunderstand me. I believe Ms. Goldberg is fighting the good fight, and her struggle on behalf of her clients is real! I admire her work, and no doubt she sees it through a lens from the trenches she is in. That said, we can't lose sight of the reality of how things actually work versus how we'd like them to work.

PANEL #2:

There was a clear plea from Ms. Franks and Ms. Souras for something to be done about sexual images, including those exploiting children. I am 100% in agreement that while 46 states have enacted anti-"revenge porn" (or, better termed, Non-Consensual Pornography) laws, such laws aren't strong enough because of the malicious intent requirement. All a perpetrator has to say is "I didn't mean to harm the victim, I did it for entertainment" or another seemingly benign purpose and poof, case closed. That struggle is difficult!

No reasonable person thinks these kinds of things are okay, yet there seemed to be an argument that platforms don't do enough to police and report such content. The question becomes: why is that? Lack of funding and resources would be my guess, either on the side of the platform or, quite frankly, on the side of an under-funded, under-resourced government agency that has to appropriately handle what is reported. What would be the sense of reporting unless you knew, for one, that the content was actionable, and, for another, that the agency it is being reported to would actually do anything about it?

Interestingly, Ms. Souras made the comment that after FOSTA, no other sites (like Backpage.com) rose up. Curiously, that directly contradicts Ms. McAdams's statement about the Amsterdam website popping up after Backpage.com was shut down. So which is it? Pro-FOSTA statements also directly contradict what I heard last October at a workshop put on by ASU's Project Humanities entitled "Ethics and Intersectionality of the Sex Trade," which covered the complexities of sex trafficking and sex work. Problems with FOSTA were raised during that workshop. Quite frankly, I see all flowery statements about FOSTA as nothing more than trying to put lipstick on a pig: trying to make a well-intentioned, emotionally driven law look like it is working when it isn't.

Outside of the comments by Ms. Franks and Ms. Souras, AG Doug Peterson of Nebraska did admit that the industry may self-regulate, and that sometimes that happens quickly, but he still complained that the preemption of state criminal law makes his job more difficult, and he advocated for an amendment adding state and territory criminal law to the list of exemptions. While that may sound moderate, the two can be different, and arguably such an amendment would be overbroad when you are only talking about sexual images. Further, the inclusion of Mr. Peterson almost seemed a plug for a subtle push on how the government allegedly can't do its job without modification of Section 230, and I think part of that push, while not given a big mention, was the end-to-end encryption debate. In rebuttal to this notion, Matt Schruers suggested that Section 230 doesn't need to be amended, but that the government needs more resources so it can do a better job with the existing laws, and he encouraged tech to work to do better as it can, suggesting efforts from both sides would be helpful.

One last important point made during this panel was Kate Klonick's distinction between the big companies and the other sites that are hosting non-consensual pornography. It is important to keep in mind that different platforms have different economic incentives and that platforms are driven by economics. I agree with Ms. Klonick that we are in a massive "norm setting" period where we are trying to figure out what to do with things, and that we can't look to tech to fix bad humans (although it can help). Sometimes to have good things, we have to accept a little bad as the trade-off.

PANEL #3

This last panel was mostly a recap of the benefits of Section 230 and the struggles we face when trying to regulate with a one-size-fits-all mentality, and I think most of the panelists seemed to agree that there needs to be some research done before we go making changes, because we don't want unintended consequences. That is something I've been saying for a while and reiterated when the ABA's Forum on Communications Law Digital Communications Committee hosted a free CLE titled "Summer School: Content Moderation 101," wherein Jeff Kosseff and I, in a panel moderated by Elisa D'Amico, Partner at K&L Gates, discussed Section 230 and a platform's struggle with content moderation. Out of this whole panel, the one speaker who had most people grumbling in the audience was David Chavern, President of the News Media Alliance. When speaking about solutions, Mr. Chavern likened Internet platforms to traditional media, as if he were comparing two oranges, and opined that platforms should be liable just like newspapers. Perhaps he doesn't understand the difference between first-party content and third-party content. The distinction between the two is huge, and therefore I found his commentary to be the least relevant and helpful to the discussion.

SUMMARY:

In summary, there seem to be a few emotion-evoking ills in society (non-consensual pornography, exploitation of children, sex trafficking, physical attacks on victims, fraud, and the drug/opioid crisis) that the government is trying to find methods to solve. That said, I don't think amending Section 230 is the way to address them unless and until there is reliable and unbiased data suggesting that the cure won't be worse than the disease. Are the ills being discussed really prevalent, or do we just think they are because they are being pushed out through information channels on a 24-hour news/information cycle?

Indeed, reasonable minds would agree that we, as a society, should try to stop harms where we can, but we also have to stop regulating based upon emotions.  We saw that with FOSTA: arguably, it has made things more difficult for law enforcement and victims alike, and has had unintended consequences, including chilled speech, for others.  You simply cannot regulate the hate out of the hearts and minds of humans, and you cannot expect technology to solve such a problem either.  Nevertheless, that seems to be the position of many of the critics of Section 230.

For more reading and additional perspectives on the DOJ Section 230 Workshop, check out these links:

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

The Supreme Court of the United States Denies Petition to Review the California Supreme Court’s Decision in Hassell v. Bird

Another win for Section 230 advocates. Back in July I wrote a blog post entitled “Section 230 is alive and well in California (for now) | Hassell v. Bird” which outlined the hotly contested, and widely watched, case that started back in 2014. When I wrote that post I left off saying that “the big question is where will things go from [there].” After all, we have seen, and continue to see, Section 230 come under attack for a host of arguably noble, yet not clearly thought through, reasons including sex-trafficking (resulting in FOSTA).

Many of us practitioners weren't sure Hassell, after losing her case before the California Supreme Court as it pertained to Yelp, Inc., would actually appeal the matter to the U.S. Supreme Court. This was based upon the fact that: there has been a long line of cases across the country holding that 47 U.S.C. § 230(c)(1) bars injunctive relief and other forms of liability against Internet publishers for third-party speech; that the U.S. Supreme Court denied another similar petition in the not-so-distant past; and that the Court held many years prior, in Zenith Radio Corp. v. Hazeltine Research, Inc., 395 U.S. 100 (1969), that "[o]ne is not bound by a judgment in personam resulting from litigation in which he is not designated as a party or to which he has not been made a party by service of process."

Clearly undeterred, in October of 2018 Hassell filed a Petition for Writ of Certiorari and accompanying Appendix with the U.S. Supreme Court, challenging the California Supreme Court's ruling. Respondent Yelp, Inc. filed its Opposition to the Petition for Writ of Certiorari in December of 2018. Hassell's Reply in Support of the Petition was filed earlier this month, and all the materials were distributed to the Justices to be discussed at the conference scheduled for Friday, January 18, 2019. [I suppose it is good to see that the government shutdown didn't kick this matter down the road.]

Many of us Section 230 advocates were waiting to see if the U.S. Supreme Court would surprise us by granting the Petition. Based upon today's decision denying Hassell's Petition, however, it appears that this question will be reserved for another day, if ever, and all is status quo with Section 230, for now.

Citations: Hassell v. Bird, 420 P.3d 776, 2018 WL 3213933 (Cal. Sup. Ct. July 2, 2018), cert. denied; Hassell v. Yelp, 2019 WL 271967 (U.S. Jan. 22, 2019) (No. 18-506)

The Ugly Side of Reputation Management: What Attorneys and Judges Need to Know

Once upon a time, not so long ago, there was no such thing as the Internet.  Information and news came from your local newspaper, television, or radio channel.  Research was done in good old-fashioned books, often at your local school, university, or public library.  If the content you were seeking was "old," chances are you had to go look at microfiche. For those that are young enough to have no clue what I'm talking about, watch this video. Then BOOM! Along came the internet! Well, sort of.  It was a slow work in progress, but by 1995 the internet was fully commercialized here in the U.S.  Anyone else remember that horrible dial-up sound followed by the coolest thing you ever heard in your life: "You've got mail!"?

As technology and the internet evolved, so did the ease of gathering and sharing information – not only by the traditional media, but by everyday users of the internet.  I've dedicated an entire series of blogs called Fighting Fair on the Internet just to the topic of people's online use.  Not every person who has access to the internet publishes flattering content (hello, Free Speech), nor do they necessarily post truthful content (ewww, defamation).  Of course, not all unflattering content is defamatory, so it's not illegal to be a crap talker, but some people try to overcome it anyway.  Either way, whether the information is true or false, such content has brought about a whole new industry for people and businesses looking for relief: reputation management.

Leave it to the entrepreneurial types to see a problem and find a lucrative solution to the same.  While there are always legitimate, ethical reputation management companies and lawyers out there doing business the right way (and kudos to all of them), there are those that are, shall we say, operating through more "questionable" means.  Those that want to push the ethical envelope often come up with "proprietary" methods to help clients, which are often sold as removal or internet de-listing/de-indexing techniques that may include questionable defamation cases and court orders, use of bogus DMCA takedown notices, or "black hat" methods.  In this article I am only going to focus on the questionable defamation cases that result in an order for injunctive relief.

BACKGROUND: QUESTIONABLE DEFAMATION CASES AND COURT ORDERS

UCLA Professor Eugene Volokh and Public Citizen litigation attorney Paul Alan Levy started shedding public light on concerns relating to questionable court orders a few years ago.  In an amicus brief submitted to the California Supreme Court in support of Yelp, Inc. in Hassell v. Bird, Volokh offered his findings to the court, discussing how default proceedings are "far too vulnerable to manipulation to be trustworthy."

As the brief says:

Injunctions aimed at removing or deindexing allegedly libelous material are a big practice area, and big business….But this process appears to be rife with fraud and with other behavior that renders it inaccurate. And this is unsurprising, precisely because many such injunctions are aimed at getting action from third parties (such as Yelp or Google) that did not appear in the original proceedings. The adversarial process usually offers some assurance of accurate fact finding, because the defendant has the opportunity and incentive to point out the plaintiff’s misstatements. But many of the injunctions in such cases are gotten through default judgments or stipulations, with no meaningful adversarial participation.

The brief further pointed to seven (7) different methods that plaintiffs were using to obtain default judgments:

(1) injunctions gotten in lawsuits brought against apparently fake defendants;

(2) injunctions gotten using fake notarizations;

(3) injunctions gotten in lawsuits brought against defendants who very likely did not author the supposedly defamatory material;

(4) injunctions that seek the deindexing of official and clearly nonlibelous government documents – with no notice to the documents’ authors – often listed in the middle of a long list of website addresses submitted to a judge as part of a default judgment;

(5) injunctions that seek the deindexing of otherwise apparently truthful mainstream articles from websites like CNN, based on defamatory comments that the plaintiffs or the plaintiffs’ agents may have posted themselves, precisely to have an excuse to deindex the article;

(6) injunctions that seek the deindexing of an entire mainstream media article based on the source’s supposedly recanting a quote, with no real determination of whether the source was lying earlier, when the article was written, or is lying now, prompted by the lawsuit;

(7) over 40 “injunctions” sent to online service providers that appear to be outright forgeries.

Well, isn't that fun?  Months after the brief was filed in Hassell, Volokh published another article with the title "Solvera Group, accused by Texas AG of masterminding fake-defendant lawsuits, now being sued by Consumer Opinion over California lawsuits."  What was clear from all of this is that website owners who have been victims of the scheme are likely watching, and the authorities are too.  The U.S. Attorney's Office for the District of Rhode Island and the State of Texas both took interest in these situations…and I suppose it is possible that more will be uncovered as time goes on.

So how are these parties getting away with this stuff?  With the help of unscrupulous reputation management companies, associated defamation attorneys…and, unfortunately, trusting judges.  Some judges have taken steps to correct the problem once the issue was brought to their attention.  As for the attorneys involved, you have to wonder whether they were actually "duped," as this Forbes article mentions, or whether they know what they are doing.  Either way, it's not a good situation.  This isn't to say that every attorney that is questioned about this stuff is necessarily guilty of perpetrating a fraud upon the court or anything like that.  However, it should serve as a cautionary warning that this stuff is real, these schemes are real, clients can be really convincing, and if one isn't careful and fails to conduct appropriate and precautionary due diligence on a client and/or the documents provided by a client…it could easily be a slippery slope into Pandora's box.   After all, no one wants to be investigated by their state bar association (or worse) for being involved with this kind of mess.

Yes, there have been lots of great articles and discussion shedding light on the subject, but the question then becomes: how do you tell the difference between a legitimate situation and a questionable one?  The answer: recognize red flags and question everything.

RED FLAGS THAT SHOULD CAUSE YOU PAUSE

In December of 2016 I had the pleasure of traveling to Miami, FL for the Internet Lawyer Leadership Summit conference to present, for CLE, on multiple topics including this subject.  At that time I provided the group with some "red flags" based upon the information I had then.  Since that time I have gained an even greater knowledge base on this subject simply by paying attention to industry issues and reading – a lot.  I have now compiled the following list of cautionary flags, with some general examples and practical advice, that, at minimum, should have you asking a few more questions:

RED FLAGS FOR ATTORNEYS

  • If the entity or person feeding you the "lead" is in the reputation management industry.  You want to do some due diligence.  You could be dealing with a totally above-board individual or entity, and the lead may be 100% legit, BUT the industry seems to consist of multiple "companies" that often lead back to the same individual(s), and just because they are well known doesn't necessarily mean they are operating above board.  Do your homework before you agree to be funneled any leads.
  • If the client is asking you to make some unusual adjustments to your fee agreement.  Your fee agreement is likely pretty static.  If the client requests changes to your agreement that make you feel uncomfortable, you might want to decline representation.
  • If the client already has "all of the documents" and you don't actually deal with the defendant. We all want to trust our clients, but as some counsel have already learned, accepting what your client tells you and/or provides you as gospel, without a second thought, can land you in hot water.  Consider asking to meet the defendant in person, or have them appear before a person licensed to administer an oath and check identification, such as a notary public of YOUR choosing, to ensure the defendant is real and that the testimony they are giving in the declaration or affidavit is genuine.  You want to make sure everything adds up, and communication by telephone or email may not protect you enough.  When it comes to documents provided by the client, or the alleged post author, watch for the following:
    • Ensure that the address listed on any affidavit or other document isn't completely bogus.  Run a search on Google – is it even a real address?  For all you know, you could be getting the address of the local train tracks.
    • Ensure that any notary stamp on an affidavit is consistent with where the affiant purports to live. It will rarely make sense for an affiant to list their address as, for example, Plains, New York when the notary stamp suggests the notary is based out of Sacramento, California. It will make even less sense if the affiant supposedly lives out of the country but is being notarized by a notary in the States.
    • Ensure that the notary is actually a real notary.  You can typically find records of notaries with the Secretary of State of the state in which the notary is commissioned.  Make sure they are a real person.  If you really want to be sure that they actually signed your document, and that the signature wasn't "lifted" from elsewhere (yay, technology), check in with the notary and/or see if their records are publicly on file somewhere that you can check.
  • If the entity alleged to be the plaintiff isn't actually a registered entity in the state it purports, in the complaint, to be from.  If the plaintiff is supposed to be ABC Ventures, LLC out of San Diego, California, there should be a record of ABC Ventures, LLC actually listed, and active, in the California Secretary of State's business records.  The people that you are talking to should also, in theory, be the members/managers of such entity.  For example, if you are always talking to a "secretary," you might want to insist on a more direct contact.
  • If the person or entity listed as the plaintiff isn't actually mentioned in the content at the subject URL in the complaint.  If a plaintiff is going to bring a case, they should at least have standing to do so.  You should be cautious of any plaintiff that isn't actually at issue or fails to have a valid, direct connection that would give them standing to bring the claim.
  • If the subject post doesn’t contain any defamatory statements in the first place.  Just because a post isn’t flattering doesn’t mean that it is actually defamatory.  Similarly, public documents aren’t typically seen as defamatory either. Who is saying it is false? Why is the statement false? What evidence supports the allegation that it is false?  
  • If the subject posting is outside of the statute of limitations for bringing claims in the state in which you intend to file.  Now, I know that some may disagree with me, and there may be bar opinions in different states that suggest otherwise; however, if you are presented with a post that is outside of the statute of limitations for a defamation claim, subject to the single publication rule, and there is no real reason for tolling (such as the statement being held in a secret document not generally public – which pretty much excludes items on the internet), that may be of concern to you.  I wrote before on why the statute of limitations is important, especially if you are the type to follow the ABA's Model Rules of Professional Conduct, Rule 3.1.  Even here in Arizona, the bar has raised, in disciplinary proceedings, in connection with other infractions, concerns about bringing claims outside of the statute of limitations, citing a violation of ER 8.4(d).  See generally In re Aubuchon, 233 Ariz. 62 (Ariz. 2013).
  • If a case was filed in a wholly separate state from the Plaintiff and Defendant and you are asked to be "local counsel" to marshal documents to the court or simply to submit them to a search engine like Google.  It is not improbable that local counsel will be called to assist with basic filings or to submit an order to Google, and it is possible that such documents contain questionable materials.  It's always a good idea to review the materials and give them a heightened level of scrutiny before just marshaling them off to the court or a search engine.  This is especially true if the Plaintiff is no longer associated with prior counsel and is just looking for a different lawyer to help with this "one thing" – as if a submission from an attorney bears more weight than a submission from anyone else.
  • If the plaintiff claims to already know who the author of the subject alleged defamatory post is, yet the post itself is anonymous.  Yes, it is possible, based on an author's content and how much detail is placed in a post, that one might be able to figure out who the author is. However, in my experience, many authors tend to write just vaguely enough to keep themselves anonymous.  If that is the case, without a subpoena to the content host, how does one actually know who the author is?  Some states, like Arizona, have specific notice requirements for subpoenas seeking identifying user information, which require notice to be posted in the same manner, through the same medium, in which the subject posting was made.  If a notice isn't present on the website, there likely wasn't a subpoena (assuming the website requires strict compliance with the law). Mobilisa, Inc. v. Doe, 170 P.3d 712, 217 Ariz. 103 (Ariz. App. 2007).
  • If the case was settled in RECORD TIME.  Often these matters are being "resolved" within a few weeks to only a couple of months.  As most of us know, the wheels of justice are SLOW.
  • If the case is settled without any answers or discovery being done.  This goes to my prior point about knowing who the real author is – or, for that matter, whether the allegations in a subject post are even false.
  • If notice about the case was not personally served by a process server.  Many states allow certified mailing for service.  Do you really know who is signing that little green form and accepting service?  Was some random person paid to sign that?

RED FLAGS FOR JUDGES (Consider all of the above generally plus the following)

  • If a Complaint is filed and shortly thereafter a stipulated judgment is presented requesting injunctive relief, without the defendant ever actually making an appearance.  This seems to be one of the more popular tactics.  A way to curb this kind of abuse would be to hold a hearing where all parties must appear in person before the court (especially the named defendant signing the stipulation) before any such injunctive order is signed and entered.
  • If an attorney files an affidavit claiming a good-faith attempt to locate the defendant, but discovery was never conducted on the hosting website.  Many sites will respond to discovery so long as their state's laws for obtaining such information (like Arizona's Mobilisa case) are followed.  Arguably, it is disingenuous for an attorney to say they have tried when they really haven't.  Chances are, the real author may not even know about the case, and entering a default judgment under such circumstances deprives them of the opportunity to appear and defend against the matter.
  • If you order the parties to appear and then suddenly the case gets dismissed.  Requiring the parties to appear before the court thwarts the scheme.  If this happens in a defamation-related case, it could be seen as a red flag.  The plaintiff may very well try to dismiss the action and simply refile under different plaintiff and defendant names, but targeting the same URL that was at issue in the prior, dismissed action.
  • If the order for injunctive relief contains URLs that were not originally part of the Complaint.  Sneaky plaintiffs and their counsel may attempt to include other postings, from the same or different websites, that are not really at issue and/or that were arguably written by other individuals.  Make sure that the URLs listed on the order are the same as those listed in the complaint.
  • If the complaint contains a host of posts, with a wide range of dates and differing syntax, yet the plaintiff claims they were all written by the same person.  In my experience, very rarely (though it does happen) will one person go on a binge and write a bunch of different posts about one person or entity.  There is typically more than one author involved, so any statement to the contrary should raise a red flag.

Some journalists that have been tracking these kinds of matters think that these schemes may be nearing an end.  I would like to think so; however, in my opinion these problems are far from over unless unsuspecting attorneys, judges, and even websites and search engines get a little more cautious about how they process these court orders for content removal, especially older orders.  I have already discussed why I think search engine de-indexing isn't necessarily a viable reputation management solution, and in part that is because, arguably, at least for now, Section 230 of the Communications Decency Act bars injunctive relief, i.e., there is no obligation for websites to remove content anyway.  If a platform or search engine decides to remove or otherwise de-index content, at least here in the U.S., it is doing so based upon its own company policy…not some legal duty.

In a perfect world none of these issues would exist. Unfortunately, that’s not the world we live in and the best we can do is be vigilant. Hopefully, through this article, I have provided some food for thought for attorneys and judges alike. You never know when such a situation will arise.

All information contained in this blog (www.beebelawpllc.blog.com) is meant for general informational purposes only and should not be construed as legal advice or relied upon as such.  All legal questions should be directed to a licensed attorney in your jurisdiction.