Newsweek’s 12(b)(6) motion failed in defamation case – Boone v. Newsweek

I have never understood why news publications include images in their articles that aren’t actually related to the story or incident being reported. This is especially true when they are reporting critically about specific individuals. I get it, images help with clicks and drawing attention, but that’s what stock photos or heavily cropped photos are for. Nevertheless, according to the statements in the Court’s opinion, Newsweek decided to do just that, which resulted in, without surprise, a defamation and false light case against them.

The below information is based upon the information provided in the court opinion. I have no independent knowledge about the facts of this case.

Plaintiff: James Boone

Defendant: Newsweek, LLC, et al. (related entities)

HIGH LEVEL OVERVIEW

Newsweek, an online news organization, published a story about a police officer who was accused of racially profiling a man in a restaurant. Rather than posting an image of the officer who was actually accused of the profiling, Newsweek chose to embed a photo of a different police officer (apparently identifiable by partial face, nametag, and badge number) who had nothing to do with the headline or the story. Allegedly, as a consequence, Boone and his family received texts, emails, and messages via social media inquiring about the article under the impression that he was involved, ultimately resulting in Boone having to seek police protection. Boone’s lawyer wrote to Newsweek alerting them to the issue and asked that “appropriate measures [be taken] to mitigate the harm.” For whatever reason, Newsweek apparently didn’t respond. Consequently, Boone filed a lawsuit against Newsweek for defamation and false light in the United States District Court for the Eastern District of Pennsylvania.

Newsweek then filed a motion to dismiss under Fed. R. Civ. P. 12(b)(6) [failure to state a claim], arguing that Boone failed to plead enough facts to support a reasonable inference that Newsweek acted with “actual malice.”

A LITTLE INTO THE LEGAL WEEDS

In this particular instance, Boone is considered a public figure. “To prevail on a defamation case, the First Amendment requires that public-figure plaintiffs plead, and later prove, that the defendant acted with ‘actual malice.’” Contrary to most lay persons’ belief, and as the court explains, “‘[a]ctual malice’ is a term of art that does not connote ill will or improper motivation”; rather, it means that the publisher acted “with knowledge that [the allegedly defamatory statement] was false or with reckless disregard of whether it was false or not.” Breaking it down further, “reckless disregard” means “that the defendant in fact entertained serious doubts as to the truth of the statement or that the defendant had a subjective awareness of probable falsity.”

The “actual malice” standard is a pretty high bar to recovery. It can be even higher when you’re considering, as here, not a directly false statement but alleged defamation by implication. In this instance, Boone “has to show that [Newsweek] ‘either intended to communicate the defamatory meaning or knew of the defamatory meaning and was reckless in regard to it.’” This inquiry is subjective in nature, requiring “some evidence showing, directly or circumstantially, that the defendants themselves understood the potential defamatory meaning of their statement.” Obviously, false implications are capable of being defamatory.

Here, Boone would need to prove that “Newsweek knew the implication that Boone was involved in [the subject incident] or was reckless about that falsity” and that “Newsweek either intended to convey the false impression that Boone was involved in [the subject incident], or knew that publishing the photograph would likely convey the false impression that Boone was involved in [the subject incident] but recklessly published it anyway.” While the Court discusses some other things, it focused on the fact that Boone’s badge number and name tag were visible in the photograph. The Court stated:

“The fact that the photograph depicted Boone’s nametag and badge number therefore gives rise to a reasonable inference that Newsweek (1) knew that Boone was not involved in the [subject] incident or acted in reckless disregard of that fact, and (2) knew that publishing the photograph would likely convey the false impression that Boone was involved in the [subject] incident but recklessly published it anyway.”

Because there was a “reasonable inference that Newsweek acted with actual malice” the Court denied the motion to dismiss the defamation claim. With respect to the false light claim, which also requires the finding of actual malice, the Court similarly denied the motion to dismiss.

FINAL THOUGHTS

Defamation litigation can be part of the “cost of doing business” when you are in the news publication business. That said, this was, in my opinion, easily avoidable. I’m not sure if there was a failure to have full legal review done before the story was published, or if someone didn’t take the demand letter from Boone’s attorney seriously … but Newsweek, with the limited information we’re presented with anyway, appears to have had two opportunities to avoid this litigation and didn’t take advantage of either. The first would have been to train reporting and editing staff not to use unrelated images, especially of identifiable people, in news reporting. This should be a no-brainer, but given how often I see it happen in news publications, I’m not surprised. The second would have been to acknowledge the mistake and simply swap out the picture for something more appropriate … you know, like an image of the actual officer accused in the article … when they received notice that there was an issue. Doubling down on something like this seems like an unnecessary risk … one that has now resulted in costly litigation. Maybe Newsweek has a huge litigation budget … but even then, you’d think they’d want to use it a little more wisely.

Citation: Boone v. Newsweek, LLC, Case No. 22-1601 (E.D. Pa. Feb. 27, 2023)

DISCLAIMER: This is for general information purposes only. This should not be relied upon as formal legal advice. If you have a legal matter that you are concerned with, you should seek out an attorney in your jurisdiction who may be able to advise you of your rights and options.


NY District Court Swings a Bat at “The Hateful Conduct Law” – Volokh v. James

This February 14th (2023), Valentine’s Day, the NY Federal District Court showed no love for New York’s Hateful Conduct Law when it granted a preliminary injunction to halt it. This is, to me, an exceptionally fun case because it involves not only the First Amendment (to the United States Constitution) but also Section 230 of the Communications Decency Act, 47 U.S.C. § 230. I’m also intrigued because the renowned Eugene Volokh, Locals Technology, Inc., and Rumble Canada, Inc. are the Plaintiffs. If Professor Volokh is involved, it’s likely to be an interesting argument. The information about the case below has been pulled from the Court’s Opinion and various linked websites.

Plaintiffs: Eugene Volokh, Locals Technology, Inc., and Rumble Canada, Inc.

Defendant: Letitia James, in her official capacity as New York Attorney General

Case No.: 22-cv-10195 (ALC)

The Honorable Andrew L. Carter, Jr. started the opinion with the following powerful quote:

 “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’”

Matal v. Tam, 137 S.Ct. 1744, 1764 (2017) 

Before we get into what happened, it’s worth taking a moment to explain who the Plaintiffs in the case are. Eugene Volokh (“Volokh”) is a renowned First Amendment law professor at UCLA. In addition, Volokh is the co-owner and operator of the popular legal blog known as the Volokh Conspiracy. Rumble operates a website, similar to YouTube, which allows third-party independent creators to upload and share video content. Rumble sets itself apart from other similar platforms because it has a “free speech purpose” and its “mission [is] ‘to protect a free and open internet’ and to ‘create technologies that are immune to cancel culture.’” Locals Technology, Inc. (“Locals”) is a subsidiary of Rumble and also operates a website that allows third-party content to be shared among paid, and unpaid, subscribers. Similar to Rumble, Locals also reports having a “pro-free speech purpose” and a “mission of being ‘committed to fostering a community that is safe, respectful, and dedicated to the free exchange of ideas.’” Suffice it to say, the Plaintiffs are no strangers to the First Amendment or Section 230. So how did these parties become Plaintiffs? New York tried to pass a well-intentioned, but arguably unconstitutional, law that could very well negatively impact them.

On May 14th of last year, 2022, some random racist nut job used Twitch (a social media site) to livestream himself carrying out a mass shooting on shoppers at a grocery store in Buffalo, New York. This disgusting act of violence left 10 people dead and three people wounded. As with most atrocities, and with what I call the “train wreck effect”, the video went viral on various other social media platforms. In response to the atrocity, New York’s Governor Kathy Hochul kicked the matter over to the Attorney General’s Office for investigation, with an apparent instruction to focus on “the specific online platforms that were used to broadcast and amplify the acts and intentions of the mass shooting” and to “investigate various online platforms for ‘civil or criminal liability for their role in promoting, facilitating, or providing a platform to plan or promote violence.’” Apparently the Governor hasn’t heard about Section 230, but I’ll get to that in a minute. After the investigation, the Attorney General’s Office released a report, and later a press release, stating that “[o]nline platforms should be held accountable for allowing hateful and dangerous content to spread on their platforms” because an alleged “lack of oversight, transparency, and accountability of these platforms allows hateful and extremist views to proliferate online.” This is where anyone with knowledge of this area of law should insert the facepalm emoji. If you aren’t familiar with this area of law, this will help explain (a little – we’re trying to keep this from being a dissertation).

Now no reasonable person will disagree that this event was tragic and disgusting. Humans are weird beings and for whatever reason (though I suspect a deep dive into psychology would provide some insight), we cannot look away from a train wreck. We’re drawn to it like a moth to a flame. Just look at any news organization and what is shared. You can’t tell me that’s not filled with “train wreck” information. Don Henley said it best in his lyrics in the 1982 song Dirty Laundry, talking about the news: “she can tell you about the plane crash with a gleam in her eye” … “it’s interesting when people die, give us dirty laundry”. A Google search for the song lyrics will give you full context if you’re not a Don Henley fan … but even 40 plus years later, this is still a truth.

In an effort to combat the perceived harms from the atrocity that went viral, New York enacted The Hateful Conduct Law, entitled “Social media networks; hateful conduct prohibited,” which took effect on December 3, 2022. What in the world does that mean? Well, the law applies to “social media networks” and defines “hateful conduct” as: “[T]he use of a social media network to vilify, humiliate, incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” N.Y. Gen. Bus. Law § 394-ccc(1)(a). Okay, but still …

In explaining The Hateful Conduct Law, and as the Court’s opinion (with citations omitted) explains:

[T]he Hateful Conduct Law requires that social media networks create a complaint mechanism for three types of “conduct”: (1) conduct that vilifies; (2) conduct that humiliates; and (3) conduct that incites violence. This “conduct” falls within the law’s definition if it is aimed at an individual or group based on their “race”, “color”, “religion”, “ethnicity”, “national origin”, “disability”, “sex”, “sexual orientation”, “gender identity” or “gender expression”.

The Hateful Conduct Law has two main requirements: (1) a mechanism for social media users to file complaints about instances of “hateful conduct” and (2) disclosure of the social media network’s policy for how it will respond to any such complaints. First, the law requires a social media network to “provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct.” This mechanism must “be clearly accessible to users of such network and easily accessed from both a social media networks’ application and website. . . .” and must “allow the social media network to provide a direct response to any individual reporting hateful conduct informing them of how the matter is being handled.” N.Y. Gen. Bus. Law § 394-ccc(2).

Second, a social media network must “have a clear and concise policy readily available and accessible on their website and application. . . ” N.Y. Gen. Bus. Law § 394-ccc(3). This policy must “include how such social media network will respond and address the reports of incidents of hateful conduct on their platform.” N.Y. Gen. Bus. Law § 394-ccc(3).

The law also empowers the Attorney General to investigate violations of the law and provides for civil penalties for social media networks which “knowingly fail to comply” with the requirements. N.Y. Gen. Bus. Law § 394-ccc(5).

Naturally this raised a lot of questions. How far reaching is this law? Who and what counts as a “social media network”? What persons or entities would be impacted? Who decides what is “hateful conduct”? Does the government have the authority to try and regulate speech in this way?

Two days before the law was to go into effect, on December 1, 2022, the Plaintiffs commenced the instant action, bringing both facial and as-applied challenges to The Hateful Conduct Law. Plaintiffs argued that the law “violates the First Amendment because it: (1) is a content and viewpoint-based regulation of speech; (2) is overbroad; and (3) is void for vagueness.” Plaintiffs also alleged that the law is preempted by Section 230 of the Communications Decency Act.

For the full discussion and analysis on the First Amendment arguments, it’s best to review the full opinion, however, the Court’s opinion opened with the following summary of its position (about the First Amendment as applied to the law):

“With the well-intentioned goal of providing the public with clear policies and mechanisms to facilitate reporting hate speech on social media, the New York State legislature enacted N.Y. Gen. Bus. Law § 394-ccc (“the Hateful Conduct Law” or “the law”). Yet, the First Amendment protects from state regulation speech that may be deemed “hateful” and generally disfavors regulation of speech based on its content unless it is narrowly tailored to serve a compelling governmental interest. The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal.”

Plaintiffs’ preemption argument was that Section 230 of the Communications Decency Act preempts the law because the law imposes liability on websites by treating them as publishers. As the Court outlines (some citations to cases omitted):

The Communications Decency Act provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). The Act has an express preemption provision which states that “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. § 230(e)(3).

As compared to the section of the Opinion regarding the First Amendment, the Court gives very little analysis on the Section 230 preemption claim beyond making the following statements:

“A plain reading of the Hateful Conduct Law shows that Plaintiffs’ argument is without merit. The law imposes liability on social media networks for failing to provide a mechanism for users to complain of “hateful conduct” and for failure to disclose their policy on how they will respond to complaints. N.Y. Gen. Bus. Law § 394-ccc(5). The law does not impose liability on social media networks for failing to respond to an incident of “hateful conduct”, nor does it impose liability on the network for its users own “hateful conduct”. The law does not even require that social media networks remove instances of “hateful conduct” from their websites. Therefore, the Hateful Conduct Law does not impose liability on Plaintiffs as publishers in contravention of the Communications Decency Act.” (emphasis added)

Hold up sparkles. So the Court recognizes that platforms cannot be held liable (in these instances anyway) for third-party content, no matter how ugly that content might be, yet wants to force (punish, in my opinion) a platform to spend big money on development to create all these content reporting mechanisms, and set transparency policies, for content that they have no legal requirement to remove? How does this law make sense in the first place? What is the point (besides trying to trap platforms into having a policy that, if they don’t follow it, could give rise to an action for unfair or deceptive advertising)? This doesn’t encourage moderation. In fact, I’d argue that it does the opposite and encourages a website to say “we don’t do anything about speech that someone claims to be harmful because we don’t want liability for failing to do so if we miss something.” In my mind, this is a punishment based upon third-party content. You don’t need a “reporting mechanism” for content that people aren’t likely to find offensive (like cute cat videos). To this end, I can see why Plaintiffs raised a Section 230 preemption argument … because if you drill down, the law is still trying to force websites to take action to deal with undesirable third-party content (and then punish them if they don’t follow whatever their policy is). In my view, it’s an attempt to do an end run around Section 230. The root issue is still undesirable third-party content. Consequently, I’m not sure I agree with the Court’s position here. I don’t think the Court drilled down enough to the root of the issue.

Either way, the Court did, as explained in the beginning, grant Plaintiffs’ Motion for a Preliminary Injunction (based upon the First Amendment arguments), which, for now, prohibits New York from trying to enforce the law.

Citation: Volokh v. James, Case No. 22-cv-10195 (ALC) (S.D.N.Y., Feb. 14, 2023)

DISCLAIMER: This is not meant to be legal advice nor should it be relied upon as such.

Pro Se’s kitchen sink approach results in a loss – Lloyd v. Facebook

The “kitchen sink approach” isn’t an uncommon pleading strategy when it comes to filing lawsuits against platforms. Notwithstanding decades of precedent clearly indicating that such efforts are doomed to fail, plaintiffs still give it the ol’ college try. While this makes more sense with pro se plaintiffs, who don’t have the same legal training and understanding of how to research case law, they aren’t the only ones who try it … no matter how many times they lose. Indeed, even some lawyers like to get paid to make losing arguments. [Insert the hands up shrug emoji here.]

Plaintiff: Susan Lloyd

Defendants: Facebook, Inc.; Meta Platforms, Inc.; Mark Zuckerberg (collectively, “Defendants”)

In this instance, Plaintiff is a resident of Pennsylvania who suffers from “severe vision issues”. As such, she qualifies as “disabled” under the Americans with Disabilities Act (“ADA”). Ms. Lloyd, like approximately 266 million other Americans, uses the Facebook social media platform, which, as my readers likely know, is connected to, among other things, third-party advertisements.

While the full case history isn’t recited in the Court’s short opinion, it’s worthwhile to point out (based on the limited record before me, anyway) that Plaintiff was afforded the opportunity to amend her complaint multiple times, as the Court cites to the Third Amended Complaint (“TAC”). According to the Court’s Order, the TAC alleged the following:

Plaintiff alleged problems with the platform, claiming it is inaccessible to disabled individuals such as those with no arms or with vision problems (and itemized a laundry list of issues that I won’t cite here … but suffice it to say that there was a complaint that the font size cannot be made larger). [SIDE NOTE: For those that are unaware, website accessibility is a thing, and plaintiffs can, and will, try to hold website operators (of all types, not just big ones like Facebook) accountable if they deem there to be an accessibility issue. If you want to learn a little more, you can read the information that is put out on the Beebe Law website regarding ADA Website Compliance.]

Plaintiff also alleged that the advertisements on Facebook were tracking her without her permission … except that users agree to Facebook’s Terms of Service (which presumably allow for that, since the court brought it up). I’m not sure at what point people will realize that if you are using something for free, you ARE the product. Indeed, there are many new privacy laws being put into place in various states (e.g., California, Colorado, Utah, Virginia and Connecticut) but chances are, especially with large multi-national platforms, they are on top of the rules and are ensuring their compliance. If you aren’t checking your privacy settings, or blocking tracking pixels, etc., at some point that’s going to be on you. Technology gives folks ways to opt out – if you can locate them. I realize that sometimes these things can be hard to find – but often a search on Google will land you results – or just ask any late-teen or early-20s person. They seem to have a solid command of stuff like this these days.

Plaintiff also alleged that Defendants allowed “over 500 people to harass and bully Plaintiff on Facebook.” The allegations of threats by the other users are rather disturbing and won’t be repeated here (though you can review the case for the quotes). However, Plaintiff stated that each time she reported the harassment she, and others, were told that it didn’t violate community standards. There is more to the story, as things allegedly escalated offline. The situation complained about, if true, is quite unsettling … and anyone with decency would be sympathetic to Plaintiff’s concerns.

[SIDE NOTE: That’s not to suggest that what happened, if true, isn’t something that should be looked at and addressed for the future. I’m well aware that Facebook (along with other social media) has imperfect systems. Things that shouldn’t be blocked are blocked. For example, I’ve seen images of positive quotes and peanut butter cookies be blocked or covered from initial viewing as “sensitive”. On the other hand, I’ve also seen things that (subjectively speaking, but as someone who spent nearly a decade handling content moderation escalations) should be blocked, that aren’t. Like clearly spammy or scammer accounts. We all know them when we see them, yet they remain even after reporting them. I’ve been frustrated by the system myself … and know well both sides of that argument. Nevertheless, if one were to take into account the sheer volume of posts and reports that come in, you’d realize that it’s a modern miracle that they have any system for trying to deal with such issues at all. Content moderation at scale is incredibly difficult.]

Notwithstanding the arguments offered, the court was quick to procedurally dismiss all but the breach of contract claim because those claims had already been dismissed before (Plaintiff apparently re-pleaded the same causes of action). More specifically, the court dismissed the ADA and Rehabilitation Act claims because (at least under the 9th Cir.) Facebook is not a place of public accommodation under federal law. [SIDE NOTE: there is a pretty deep split in the circuits on this point – so this isn’t necessarily a “get out of jail free” card if one is a website operator – especially if one may be availing oneself of the jurisdiction of another circuit that wouldn’t be so favorable. Again, if you’re curious about ADA Website Compliance, check out the Beebe Law website.] Similarly, Plaintiff’s Unruh Act claim failed because the act doesn’t apply to digital-only websites such as Facebook. Plaintiff’s fraud and intentional misrepresentation claims failed because there wasn’t really any proof that Facebook intended to defraud Plaintiff; only the Terms of Service were discussed. So naturally, if you can’t back up the claims, it ends up being a wasted argument. Maybe not so clear for pro se litigants, but this should be pretty clear to lawyers (still doesn’t keep them from trying). Plaintiff’s claims for invasion of privacy, negligence, and negligent infliction of emotional distress failed because they are barred by Section 230 of the Communications Decency Act, 47 U.S.C. § 230. Again, this is another one of those situations where decades of precedent contrary to a plaintiff’s position isn’t a deterrent from trying to advance such claims anyway. Lastly, the claims against Zuckerberg were dismissed because Plaintiff didn’t allege that he was personally involved in or directed the challenged acts (i.e., he isn’t an “alter ego”).

This left the breach of contract claim. Defendants argued that Plaintiff’s claim for breach of contract should be dismissed because the Court lacks diversity jurisdiction over the claim, as she cannot meet the amount in controversy. As the Court explains, “28 U.S.C. § 1332 grants federal courts’ original jurisdiction over civil actions where the amount in controversy exceeds $75,000 and the parties are citizens of different states.” Indeed, the parties are from different states; however, the requirement that the amount in controversy exceed $75,000 is where Plaintiff hit an impossible hurdle. As discussed before, users of Facebook all agree to Facebook’s Terms of Service. Here, Plaintiff’s claim for breach of contract is based on the conduct of third-party users, and Facebook’s Terms of Service disclaim all liability for third-party conduct. Further, the TOS also provide that “aggregate liability arising out of . . . the [TOS] will not exceed the greater of $100 or the amount Plaintiff has paid Meta in the past twelve months.” Facebook, having been around the block a time or two with litigation, has definitely refined its TOS over the years to make it nearly impenetrable. I mean, never say never, BUT … good luck. Lastly, the TOS preclude damages for “lost profits, revenues, information, or data, or consequential, special, indirect, exemplary, punitive, or incidental damages.” Based upon all of these issues, there is no legal way that Plaintiff could meet the required amount in controversy of $75,000. The Court dismissed the final remaining claim, breach of contract, without leave to amend, although the Court did add that “[t]he Court expresses no opinion on whether Plaintiff may pursue her contract claim in state court.” One might construe that as a sympathetic signal to the Plaintiff (or other future plaintiffs) …

There are a few takeaways from this case, in my opinion:

  1. Throwing garden variety kitchen sink claims at platforms, especially ones the size of Facebook, is likely to be a waste of ink on paper on top of the time it takes to even put the ink on the paper in the first place. If you have concerns about issues with a platform, engage the services of an Internet lawyer in your area who understands all of these things.
  2. Properly drafted, and accepted, Terms of Service for your website can be a huge shield from liability. This is why copying and pasting from some random site, or using a “one-size-fits-all” free form from one of those “do-it-yourself” sites, is penny wise and pound foolish. Just hire a darn Internet lawyer to help you if you’re operating a business website. It can save you money and headache in the long run – an investment in the future of your company, if you will.
  3. Website Accessibility, and related claims, is a thing! You don’t hear a lot about it because the matters don’t typically make it to court. Many of these cases settle based upon demand letters for thousands of dollars and costly remediation work … so don’t think that it can’t happen to you (if you’re operating a website for your business).

Citation: Lloyd v. Facebook, Inc., Case No. 21-cv-10075-EMC (N.D. Cal. Feb. 7, 2023)

DISCLAIMER: This is for general information only. This is not legal advice nor should it be relied upon as such. If you have concerns regarding your own specific situation, be sure to reach out to an attorney in your jurisdiction who may be able to advise you of your rights.

GoDaddy not liable for third party snagging previously owned domain – Rigsby v. GoDaddy Inc.

This case presents a cautionary tale about why you want to turn on auto-renewal, and confirm the renewal actually goes through, for your website domains if you plan on using them long term for any purpose. Failing to renew timely (or to confirm there was an actual renewal) can have unintended, frustrating consequences.

Plaintiffs-Appellants: Scott Rigsby and Scott Rigsby Foundation, Inc. (together “Rigsby”).

Defendants-Appellees: GoDaddy, Inc., GoDaddy.com, LLC, and GoDaddy Operating Company, LLC and Desert Newco, LLC (together “GoDaddy”).

Scott Rigsby is a physically challenged athlete and motivational speaker who started the Scott Rigsby Foundation. In 2007, in connection with the foundation, he registered the domain name “scottrigsbyfoundation.org” with GoDaddy.com. Unfortunately, and allegedly as a result of a glitch in GoDaddy’s billing system, Rigsby failed to pay the annual renewal fee in 2018. In these instances, the domain typically becomes free for anyone to purchase, and that is exactly what happened – a third party registered the then-available domain name and turned it into a gambling information site. Naturally, this is a very frustrating situation for Rigsby.

Rigsby then decided to sue GoDaddy for violations of the Lanham Act, 15 U.S.C. § 1125(a) (which, for my non-legal-industry readers, is the primary federal trademark statute in the United States) and various state laws, and sought declaratory and injunctive relief, including return of the domain name.

This legal strategy is most curious to me because Rigsby didn’t name the third party that actually purchased the domain and made use of it. For those that are unaware, “use in commerce” by the would-be trademark infringer is a requirement of the Lanham Act, and it seems like a pretty long leap to suggest that GoDaddy was the party in this situation that made use of the subject domain.

Rigsby also faced another hurdle: GoDaddy has immunity under the Anticybersquatting Consumer Protection Act, 15 U.S.C. § 1125(d) (“ACPA”). The ACPA limits the secondary liability of domain name registrars and registries for the act of registering a domain name. Rigsby would be hard pressed to show that GoDaddy registered, used, or trafficked in his domain name with a bad faith intent to profit. Similarly, Rigsby would also be hard pressed to show that GoDaddy’s alleged wrongful conduct surpassed mere registration activity.

Lastly, Rigsby faced a hurdle when it comes to Section 230 of the Communications Decency Act, 47 U.S.C. § 230. I've written about Section 230 many times in my blogs, but in general Section 230 provides immunity to websites/platforms from claims stemming from content created by third parties. To be sure, there are some exceptions, including intellectual property law claims. See 47 U.S.C. § 230(e)(2). Here, however, there wasn't an act by GoDaddy that would sit squarely within the Lanham Act such that it would have liability, so that exception doesn't apply. Additionally, 47 U.S.C. § 230(e)(3) preempts state law claims. Put another way, with a few exceptions, a platform will also avoid liability for various state law claims. As such, Section 230 would shield GoDaddy from liability for Rigsby's state-law claims for invasion of privacy, publicity, trade libel, libel, and violations of Arizona's Consumer Fraud Act. These are garden variety tort claims that plaintiffs will typically assert in these kinds of instances; however, plaintiffs have to be careful that the claims are directed at the right party … and it's fairly rare that a platform is going to be the right party in these situations.

The District of Arizona dismissed all of the claims against GoDaddy, and Rigsby then appealed the dismissal to the Ninth Circuit Court of Appeals. While sympathetic to Rigsby's plight, the court correctly concluded, on February 3, 2023, that Rigsby was barking up the wrong tree in terms of whom he named as a defendant, and it appropriately affirmed the dismissal of the claims against GoDaddy.

To read the court’s full opinion which goes into greater detail about the facts of this case, click on the citation below.

Citation: Rigsby v. GoDaddy, Inc., Case No. 21-16182 (9th Cir. Feb. 3, 2023)

DISCLAIMER: This is for general information only. None of this is meant to be legal advice nor should it be relied upon as such.

Section 230 doesn’t protect against a UGC platform’s own unlawful conduct – Fed. Trade Comm’n v. Roomster Corp

This seems like a no-brainer to anyone who understands Section 230 of the Communications Decency Act, but for some reason it still hasn't stopped defendants from making the tried-and-failed argument that Section 230 protects a platform from its own unlawful conduct.

Plaintiffs: Federal Trade Commission, State of California, State of Colorado, State of Florida, State of Illinois, Commonwealth of Massachusetts, and State of New York

Defendants: Roomster Corporation, John Shriber, individually and as an officer of Roomster, and Roman Zaks, individually and as an officer of Roomster.

Roomster (roomster.com) is an internet-based (desktop and mobile app) room and roommate finder platform that purports to be an intermediary (i.e., the middle man) between individuals who are seeking rentals, sublets, and roommates. For anyone that has been around for a minute in this industry, you might be feeling like we've got a little bit of a Roommates.com legal situation going on here, but it's different. Roomster, like many platforms that allow third-party content, also known as User Generated Content ("UGC") platforms, does not verify listings or ensure that the listings are real or authentic, and it has allegedly allowed postings to go up where the address of the listing was a U.S. Post Office. Now this might seem out of the ordinary to an everyday person reading this, but I can assure you, it's nearly impossible for any UGC platform to police every listing, especially if it is a small company with any reasonable volume of traffic, and moderation only gets harder as the platform grows. That's just the truth of operating a UGC platform.

Notwithstanding these fake posting issues, Plaintiffs allege that Defendants have falsely represented that properties listed on the Roomster platform are real, available, and verified. [OUCH!] They further allege that Defendants have created or purchased thousands of fake positive reviews to support these representations and placed fake rental listings on the Internet to drive traffic to their platform. [DOUBLE OUCH!] If true, Roomster may be in for a ride.

The FTC has alleged that Defendants' acts or practices violate Section 5(a) of the FTC Act, 15 U.S.C. § 45(a) (which in layman's terms is the federal law against unfair methods of competition and deceptive practices), and the states have alleged violations of their various state versions of deceptive acts and practices laws. At this point, based on the alleged facts, it seems about right to me.

Roomster filed a Motion to Dismiss pursuant to Rule 12(b)(6) for Plaintiffs' alleged failure to state a claim for various reasons that I won't discuss here (you can read about them in the case), but it also argued that "even if Plaintiffs may bring their claims, Defendants cannot be held liable for injuries stemming from user-generated listings and reviews because … they are interactive computer service providers and so are immune from liability for inaccuracies in user-supplied content, pursuant to Section 230 of the Communications Decency Act, 47 U.S.C. § 230." Where is the facepalm emoji when you need it? Frankly, that's a "hail-mary" and a total waste of an argument … because Section 230 does not immunize a defendant from liability for its own unlawful conduct. Indeed, a platform can be held liable for offensive content on its service or system if it contributes to the development of what makes the content unlawful. This is also true where a platform has engaged in deceptive practices or has had direct participation in a deceptive scheme. Fortunately, like many courts before it, the court in this case saw through the crap and rightfully denied the Motion to Dismiss on this (and other) points.

I smell a settlement in the air, but only time will tell.

Case Citation: Fed. Trade Comm'n v. Roomster Corp., Case No. 22 Civ. 7389 (S.D.N.Y. Feb. 1, 2023)

DISCLAIMER: This is for general information only. None of this is meant to be legal advice nor should it be relied upon as such.