Section 230 Protects Users Too – Monge v. Univ. of Pa.

One of the rarely discussed points in the grand scheme of Section 230 chatter is that Section 230 protects not only interactive Internet platforms, but also users, just like you and me, from liability for third-party content. For example, if you’re an administrator/moderator of some random Facebook group … generally speaking, Section 230 protects you from legally actionable content that other users post in that group. Just like the platforms themselves, you, as a user of the platform, also get some protections. As this case will underscore, that is also true if you share an allegedly defamatory article via email. Given how easily and frequently people share information they don’t necessarily read, let alone fact check for themselves, you’d think this would be front and center in more discussions when trying to teach people that Section 230 isn’t all about protecting “big tech”.

To be clear, the fact that Section 230 protects users too isn’t just something determined by the courts through case law; it is spelled out right in the language of the statute itself.

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

47 U.S.C. § 230(c)(1) (bold and italic emphasis added)

The below information is based upon the information provided in the court opinion. I have no independent knowledge about the facts of this case.

Plaintiff: Janet Monge

Defendants: University of Pennsylvania, Deborah Thomas, et al.

HIGH LEVEL OVERVIEW

A University of Pennsylvania faculty member, Dr. Deborah Thomas, shared an article that allegedly defamed Dr. Monge via a listserv of an organization that Dr. Monge is a member of. Obviously upset about the situation, Dr. Monge filed a lawsuit asserting claims of defamation, defamation by implication, false light, and civil aiding and abetting against defendants, including Dr. Thomas. Dr. Thomas filed a Fed. R. Civ. P. 12(b)(6) motion to dismiss for failure to state a claim upon which relief can be granted, arguing that 47 U.S.C. § 230(c)(1) immunizes her from liability. The court agreed with Dr. Thomas and dismissed the action with prejudice, i.e., dismissed those claims permanently.

THE LEGAL WEEDS

It is almost funny to say “legal weeds” here because this was an easy Section 230 win. The Court stated that “[c]ourts analyzing and applying the CDA have consistently held that distributing, sharing, and forwarding content created and/or developed by a third party is conduct immunized by the CDA” and then went on to cite six cases supporting this position, relating to content shared in an Internet chat room, via email, and through other technologies. The Court similarly rejected Dr. Monge’s “material contribution” argument, which suggested that Dr. Thomas materially contributed to the alleged defamatory statements by including her own commentary in the email forwarding the articles. The Court’s rationale was that Dr. Thomas “did not add anything new to the articles, or materially modify them, when she shared them via email,” so she did not materially contribute to the alleged defamation. The Court again cited multiple cases supporting this point. Based upon these points, the Court rightfully concluded that “Dr. Thomas’s conduct of sharing the allegedly defamatory articles via email is immune from liability under the CDA.”

SUMMARY OF THOUGHTS

As mentioned before, this was an easy Section 230 win. Ironically, this is also one of those instances where a Plaintiff, upset about the content of something, ends up making the matter a bigger deal by filing a lawsuit; now you have legal academics talking about the situation, which is a prime example of the Streisand effect. I can understand, in general, why Plaintiffs want to set the record straight when they believe false information has been put out about them. On the other hand, if you’re going to take it to court, it is important to realize that such action often shines a lot of light on an issue that you might rather have kept to a smaller audience.

Citation: Monge v. Univ. of Pa., Case No. 22-2942 (E.D. Pa. March 10, 2023)

DISCLAIMER: This is for general information purposes only. This should not be relied upon as formal legal advice. If you have a legal matter that you are concerned with, you should seek out an attorney in your jurisdiction who may be able to advise you of your rights and options.


NY District Court Swings a Bat at “The Hateful Conduct Law” – Volokh v. James

This February 14th (2023), Valentine’s Day, the NY Federal District Court showed no love for New York’s Hateful Conduct Law when it granted a preliminary injunction to halt it. This is, to me, an exceptionally fun case because it involves not only the First Amendment (to the United States Constitution) but also Section 230 of the Communications Decency Act, 47 U.S.C. § 230. I’m also intrigued because the renowned Eugene Volokh, Locals Technology, Inc., and Rumble Canada, Inc. are the Plaintiffs. If Professor Volokh is involved, it’s likely to be an interesting argument. The information about the case below has been pulled from the Court Opinion and various linked websites.

Plaintiffs: Eugene Volokh, Locals Technology, Inc., and Rumble Canada, Inc.

Defendant: Letitia James, in her official capacity as New York Attorney General

Case No.: 22-cv-10195 (ALC)

The Honorable Andrew L. Carter, Jr. started the opinion with the following powerful quote:

 “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’”

Matal v. Tam, 137 S.Ct. 1744, 1764 (2017) 

Before we get into what happened, it’s worth taking a moment to explain who the Plaintiffs in the case are. Eugene Volokh (“Volokh”) is a renowned First Amendment law professor at UCLA. In addition, Volokh is the co-owner and operator of the popular legal blog known as the Volokh Conspiracy. Rumble operates a website, similar to YouTube, which allows third-party independent creators to upload and share video content. Rumble sets itself apart from other similar platforms because it has a “free speech purpose” and its “mission [is] ‘to protect a free and open internet’ and to ‘create technologies that are immune to cancel culture.’” Locals Technology, Inc. (“Locals”) is a subsidiary of Rumble and also operates a website that allows third-party content to be shared among paid, and unpaid, subscribers. Similar to Rumble, Locals also reports having a “pro-free speech purpose” and a “mission of being ‘committed to fostering a community that is safe, respectful, and dedicated to the free exchange of ideas.’” Suffice it to say, the Plaintiffs are no strangers to the First Amendment or Section 230. So how did these parties become Plaintiffs? New York passed a well-intentioned, but arguably unconstitutional, law that could very well negatively impact them.

On May 14th of last year, 2022, some random racist nut job used Twitch (a social media site) to livestream himself carrying out a mass shooting on shoppers at a grocery store in Buffalo, New York. This disgusting act of violence left ten people dead and three people wounded. As with most atrocities, and with what I call the “train wreck effect”, this video went viral on various other social media platforms. In response to the atrocity, New York’s Governor Kathy Hochul kicked the matter over to the Attorney General’s Office for investigation, with an apparent instruction to focus on “the specific online platforms that were used to broadcast and amplify the acts and intentions of the mass shooting” and directed the Attorney General’s Office to “investigate various online platforms for ‘civil or criminal liability for their role in promoting, facilitating, or providing a platform to plan or promote violence.’” Apparently the Governor hasn’t heard about Section 230, but I’ll get to that in a minute. After investigation, the Attorney General’s Office released a report, and later a press release, stating that “[o]nline platforms should be held accountable for allowing hateful and dangerous content to spread on their platforms” because an alleged “lack of oversight, transparency, and accountability of these platforms allows hateful and extremist views to proliferate online.” This is where anyone with any knowledge about this area of law should insert the facepalm emoji. If you aren’t familiar with this area of law, this will help explain (a little – we’re trying to keep this from being a dissertation).

Now, no reasonable person will disagree that this event was tragic and disgusting. Humans are weird beings, and for whatever reason (though I suspect a deep dive into psychology would provide some insight), we cannot look away from a train wreck. We’re drawn to it like a moth to a flame. Just look at any news organization and what is shared. You can’t tell me that’s not filled with “train wreck” information. Don Henley said it best in his lyrics to the 1982 song Dirty Laundry, talking about the news: “she can tell you about the plane crash with a gleam in her eye” … “it’s interesting when people die, give us dirty laundry”. A Google search for the song lyrics will give you full context if you’re not a Don Henley fan … but even 40-plus years later, this is still a truth.

In an effort to combat the perceived harms from the atrocity that went viral, New York enacted the Hateful Conduct Law, entitled “Social media networks; hateful conduct prohibited,” which took effect on December 3, 2022. What in the world does that mean? Well, the law applies to “social media networks” and defines “hateful conduct” as: “[T]he use of a social media network to vilify, humiliate, incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” N.Y. Gen. Bus. Law § 394-ccc(1)(a). Okay, but still …

As the Court’s opinion explains (citations omitted):

[T]he Hateful Conduct Law requires that social media networks create a complaint mechanism for three types of “conduct”: (1) conduct that vilifies; (2) conduct that humiliates; and (3) conduct that incites violence. This “conduct” falls within the law’s definition if it is aimed at an individual or group based on their “race”, “color”, “religion”, “ethnicity”, “national origin”, “disability”, “sex”, “sexual orientation”, “gender identity” or “gender expression”.

The Hateful Conduct Law has two main requirements: (1) a mechanism for social media users to file complaints about instances of “hateful conduct” and (2) disclosure of the social media network’s policy for how it will respond to any such complaints. First, the law requires a social media network to “provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct.” This mechanism must “be clearly accessible to users of such network and easily accessed from both a social media networks’ application and website. . . .” and must “allow the social media network to provide a direct response to any individual reporting hateful conduct informing them of how the matter is being handled.” N.Y. Gen. Bus. Law § 394-ccc(2).

Second, a social media network must “have a clear and concise policy readily available and accessible on their website and application. . . ” N.Y. Gen. Bus. Law § 394-ccc(3). This policy must “include how such social media network will respond and address the reports of incidents of hateful conduct on their platform.” N.Y. Gen. Bus. Law § 394-ccc(3).

The law also empowers the Attorney General to investigate violations of the law and provides for civil penalties for social media networks which “knowingly fail to comply” with the requirements. N.Y. Gen. Bus. Law § 394-ccc(5).
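To make the law’s two requirements concrete, here is a minimal sketch of what a bare-bones reporting mechanism might look like in code. This is purely illustrative and assumes a hypothetical platform built with Flask; the route names, response text, and in-memory storage are my own inventions, not anything the statute or the opinion prescribes.

# A minimal sketch, assuming a hypothetical platform built with Flask.
# The route names, response text, and in-memory list are illustrative
# inventions, not anything the statute or the opinion prescribes.
from flask import Flask, jsonify, request

app = Flask(__name__)
reports = []  # stand-in for a real complaint database

@app.get("/hateful-conduct-policy")
def policy():
    # Sec. 394-ccc(3): a "clear and concise policy" must be readily
    # available and accessible on the network's website and application.
    return jsonify(policy="How we respond to reports of hateful conduct: ...")

@app.post("/report")
def report():
    # Sec. 394-ccc(2): a clear, easily accessed mechanism for users
    # to report incidents of "hateful conduct" ...
    reports.append(request.get_json(force=True))
    # ... that lets the network give the reporter a direct response
    # informing them of how the matter is being handled.
    return jsonify(status="received",
                   message="Your report was logged; here is how it will be handled: ..."), 201

Even a toy version like this shows the compliance burden: every covered network, however small, would need to build, host, and maintain something along these lines.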

Naturally this raised a lot of questions. How far reaching is this law? Who and what counts as a “social media network”? What persons or entities would be impacted? Who decides what is “hateful conduct”? Does the government have the authority to try and regulate speech in this way?

Two days before the law was to go into effect, on December 1, 2022, the instant action was commenced by the Plaintiffs, alleging both facial and as-applied challenges to the Hateful Conduct Law. Plaintiffs argued that the law “violates the First Amendment because it: (1) is a content viewpoint-based regulation of speech; (2) is overbroad; and (3) is void for vagueness.” Plaintiffs also alleged that the law is preempted by Section 230 of the Communications Decency Act.

For the full discussion and analysis of the First Amendment arguments, it’s best to review the full opinion; however, the Court opened with the following summary of its position (about the First Amendment as applied to the law):

“With the well-intentioned goal of providing the public with clear policies and mechanisms to facilitate reporting hate speech on social media, the New York State legislature enacted N.Y. Gen. Bus. Law § 394-ccc (“the Hateful Conduct Law” or “the law”). Yet, the First Amendment protects from state regulation speech that may be deemed “hateful” and generally disfavors regulation of speech based on its content unless it is narrowly tailored to serve a compelling governmental interest. The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal.”

Then there is the preemption argument made by Plaintiffs: that Section 230 of the Communications Decency Act preempts the law because the law imposes liability on websites by treating them as publishers. As the Court outlines (some citations to cases omitted):

The Communications Decency Act provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). The Act has an express preemption provision which states that “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. § 230(e)(3).

As compared to the section of the Opinion regarding the First Amendment, the Court gives very little analysis to the Section 230 preemption claim beyond making the following statements:

“A plain reading of the Hateful Conduct Law shows that Plaintiffs’ argument is without merit. The law imposes liability on social media networks for failing to provide a mechanism for users to complain of “hateful conduct” and for failure to disclose their policy on how they will respond to complaints. N.Y. Gen. Bus. Law § 394-ccc(5). The law does not impose liability on social media networks for failing to respond to an incident of “hateful conduct”, nor does it impose liability on the network for its users own “hateful conduct”. The law does not even require that social media networks remove instances of “hateful conduct” from their websites. Therefore, the Hateful Conduct Law does not impose liability on Plaintiffs as publishers in contravention of the Communications Decency Act.” (emphasis added)

Hold up, sparkles. So the Court recognizes that platforms cannot be held liable (in these instances anyway) for third-party content, no matter how ugly that content might be, yet wants to force (punish, in my opinion) a platform to spend big money on development to create all these content reporting mechanisms, and to set transparency policies, for content that it actually has no legal requirement to remove? How does this law make sense in the first place? What is the point (besides trying to trap platforms into having a policy that, if they don’t follow it, could give rise to an action for unfair or deceptive advertising)? This doesn’t encourage moderation. In fact, I’d argue that it does the opposite and encourages a website to say “we don’t do anything about speech that someone claims to be harmful because we don’t want liability for failing to do so if we miss something.” In my mind, this is a punishment based upon third-party content. You don’t need a “reporting mechanism” for content that people aren’t likely to find offensive (like cute cat videos). To this end, I can see why Plaintiffs raised a Section 230 preemption argument … because if you drill down, the law is still trying to force websites to take action to deal with undesirable third-party content (and then punish them if they don’t follow whatever their policy is). In my view, it’s an attempt to do an end run around Section 230. The root issue is still undesirable third-party content. Consequently, I’m not sure I agree with the Court’s position here. I don’t think the Court drilled down enough to the root of the issue.

Either way, the Court did, as explained in the beginning, grant Plaintiffs’ Motion for Preliminary Injunction (based upon the First Amendment arguments), which, for now, prohibits New York from trying to enforce the law.

Citation: Volokh v. James, Case No. 22-cv-10195 (ALC) (S.D.N.Y., Feb. 14, 2023)

DISCLAIMER: This is not meant to be legal advice nor should it be relied upon as such.

GoDaddy not liable for third party snagging previously owned domain – Rigsby v. GoDaddy Inc.

This case should serve as a cautionary tale of why you want to ensure you’ve got auto-renewals turned on, and that the renewal actually works, for your website domains if you plan on using them long term for any purpose. Failing to renew timely (or to confirm there was an actual renewal) can have unintended, frustrating consequences.

Plaintiffs-Appellants: Scott Rigsby and Scott Rigsby Foundation, Inc. (together “Rigsby”).

Defendants-Appellees: GoDaddy, Inc., GoDaddy.com, LLC, GoDaddy Operating Company, LLC, and Desert Newco, LLC (together “GoDaddy”).

Scott Rigsby is a physically challenged athlete and motivational speaker who started the Scott Rigsby Foundation. In 2007, in connection with the foundation, he registered the domain name “scottrigsbyfoundation.org” with GoDaddy.com. Unfortunately, and allegedly as a result of a glitch in GoDaddy’s billing system, Rigsby failed to pay the annual renewal fee in 2018. In these instances, the domain typically becomes free for anyone to purchase, and that is exactly what happened – a third party registered the then-available domain name and turned it into a gambling information site. Naturally, this is a very frustrating situation for Rigsby.
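Since the whole dispute traces back to a missed renewal, one practical takeaway is to monitor your domain’s expiration date yourself rather than relying solely on a registrar’s billing system. Below is a minimal sketch of one way to do that using the public RDAP protocol (the structured successor to WHOIS, RFC 9083); the rdap.org bootstrap service and the “expiration” event are part of the real protocol, but the domain name, threshold, and alerting behavior here are illustrative assumptions.

# A minimal sketch using the public RDAP protocol (RFC 9083). The
# rdap.org bootstrap service and the "expiration" event action are real;
# the domain name and 30-day threshold are illustrative assumptions.
from datetime import datetime, timezone

import requests

def days_until_expiry(domain: str) -> int:
    """Return the number of days until the domain's registration expires."""
    resp = requests.get(f"https://rdap.org/domain/{domain}", timeout=10)
    resp.raise_for_status()
    for event in resp.json().get("events", []):
        if event.get("eventAction") == "expiration":
            expires = datetime.fromisoformat(event["eventDate"].replace("Z", "+00:00"))
            return (expires - datetime.now(timezone.utc)).days
    raise ValueError(f"no expiration event published for {domain}")

if __name__ == "__main__":
    remaining = days_until_expiry("example.org")  # illustrative domain
    if remaining < 30:  # arbitrary warning threshold
        print(f"WARNING: domain expires in {remaining} days - renew now!")
    else:
        print(f"OK: {remaining} days until expiration")

A scheduled job running something like this would have flagged the lapse long before a third party could scoop up the name.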

Rigsby then decided to sue GoDaddy for violations of the Lanham Act, 15 U.S.C. § 1125(a) (which, for my non-legal industry readers, is the primary federal trademark statute in the United States) and various state laws, and sought declaratory and injunctive relief, including return of the domain name.

This legal strategy is most curious to me because Rigsby didn’t name the third party that actually purchased the domain and made use of it. For those that are unaware, “use in commerce” by the would-be trademark infringer is a requirement of the Lanham Act, and it seems like a pretty long leap to suggest that GoDaddy was the party in this situation that made use of the subject domain.

Rigsby also faced another hurdle: GoDaddy has immunity under the Anticybersquatting Consumer Protection Act, 15 U.S.C. § 1125(d) (“ACPA”). The ACPA limits the secondary liability of domain name registrars and registries for the act of registering a domain name. Rigsby would be hard pressed to show that GoDaddy registered, used, or trafficked in his domain name with a bad faith intent to profit. Similarly, Rigsby would be hard pressed to show that GoDaddy’s alleged wrongful conduct surpassed mere registration activity.

Lastly, Rigsby faced a hurdle when it comes to Section 230 of the Communications Decency Act, 47 U.S.C. § 230. I’ve written about Section 230 many times in my blogs, but in general, Section 230 provides immunity to websites/platforms from claims stemming from content created by third parties. To be sure, there are some exceptions, including intellectual property law claims. See 47 U.S.C. § 230(e)(2). Here, however, there wasn’t an act done by GoDaddy that would sit squarely within the Lanham Act such that GoDaddy would have liability, so that exception doesn’t apply. Additionally, 47 U.S.C. § 230(e)(3) preempts inconsistent state law claims. Put another way, with a few exceptions, a platform will also avoid liability under various state law claims. As such, Section 230 would shield GoDaddy from liability for Rigsby’s state-law claims for invasion of privacy, publicity, trade libel, libel, and violations of Arizona’s Consumer Fraud Act. These are garden-variety tort claims that plaintiffs will typically assert in these kinds of instances; however, plaintiffs have to be careful that the claims are directed at the right party … and it’s fairly rare that a platform is going to be the right party in these situations.

The District of Arizona dismissed all of the claims against GoDaddy, and Rigsby then appealed the dismissal to the Ninth Circuit Court of Appeals. While sympathetic to Rigsby’s plight, the court correctly concluded, on February 3, 2023, that Rigsby was barking up the wrong tree in terms of who was named as a defendant, and it appropriately affirmed the dismissal of the claims against GoDaddy.

To read the court’s full opinion which goes into greater detail about the facts of this case, click on the citation below.

Citation: Rigsby v. GoDaddy, Inc., Case No. 21-16182 (9th Cir. Feb. 3, 2023)

DISCLAIMER: This is for general information only. None of this is meant to be legal advice nor should it be relied upon as such.

Section 230 doesn’t protect a UGC platform from its own unlawful conduct – Fed. Trade Comm’n v. Roomster Corp.

This seems like a no-brainer to anyone who understands Section 230 of the Communications Decency Act, but for some reason that still hasn’t stopped defendants from making the tried-and-failed argument that Section 230 protects a platform from its own unlawful conduct.

Plaintiffs: Federal Trade Commission, State of California, State of Colorado, State of Florida, State of Illinois, Commonwealth of Massachusetts, and State of New York

Defendants: Roomster Corporation, John Shriber, individually and as an officer of Roomster, and Roman Zaks, individually and as an officer of Roomster.

Roomster (roomster.com) is an internet-based (desktop and mobile app) room and roommate finder platform that purports to be an intermediary (i.e., the middle man) between individuals who are seeking rentals, sublets, and roommates. For anyone that has been around for a minute in this industry, you might be feeling like we’ve got a little bit of a Roommates.com legal situation going on here, but it’s different. Roomster, like many platforms that allow third-party content, also known as User Generated Content (“UGC”) platforms, does not verify listings or ensure that the listings are real or authentic, and it has allegedly allowed postings to go up where the address of the listing was a U.S. Post Office. Now, this might seem out of the ordinary to an everyday person reading this, but I can assure you, it’s nearly impossible for any UGC platform to police every listing, especially if it is a small company with any reasonable volume of traffic, and it becomes increasingly hard to moderate as the platform grows. That’s just the truth of operating a UGC platform.

Notwithstanding these fake posting issues, Plaintiffs allege that Defendants have falsely represented that properties listed on the Roomster platform are real, available, and verified. [OUCH!] They further allege that Defendants have created or purchased thousands of fake positive reviews to support these representations and placed fake rental listings on the Internet to drive traffic to their platform. [DOUBLE OUCH!] If true, Roomster may be in for a ride.

The FTC has alleged that Defendants’ acts or practices violate Section 5(a) of the FTC Act, 15 U.S.C. § 45(a) (which, in layman’s terms, is the federal law against unfair methods of competition), and the states have alleged violations of the various state versions of deceptive acts and practices laws. At this point, based on the alleged facts, it seems about right to me.

Roomster filed a Motion to Dismiss pursuant to Rule 12(b)(6) for Plaintiffs’ alleged failure to state a claim, for various reasons that I won’t discuss here (you can read about them in the case), but it also argued that “even if Plaintiffs may bring their claims, Defendants cannot be held liable for injuries stemming from user-generated listings and reviews because … they are interactive computer service providers and so are immune from liability for inaccuracies in user-supplied content, pursuant to Section 230 of the Communications Decency Act, 47 U.S.C. § 230.” Where is the facepalm emoji when you need it? Frankly, that’s a “hail mary” and a total waste of an argument … because Section 230 does not immunize a defendant from liability for its own unlawful conduct. Indeed, a platform can be held liable for offensive content on its service or system if it contributes to the development of what makes the content unlawful. This is also true where a platform has engaged in deceptive practices or has had direct participation in a deceptive scheme. Fortunately, like many courts before it, the court in this case saw through the crap and rightfully denied the Motion to Dismiss on this point (and others).

I smell a settlement in the air, but only time will tell.

Case Citation: Fed. Trade Comm’n v. Roomster Corp., Case No. 22 Civ. 7389 (S.D.N.Y. Feb. 1, 2023)

DISCLAIMER: This is for general information only. None of this is meant to be legal advice nor should it be relied upon as such.

It’s hard to find caselaw to support your claims when you have none – Wilson v. Twitter

When the court’s opinion is barely over a page when printed, it’s a good sign that the underlying case had little to no merit.

This was a pro se lawsuit filed against Twitter because Twitter suspended at least three of Plaintiff’s accounts, which he used to “insult gay, lesbian, bisexual, and transgender people,” for violating the company’s terms of service, specifically its rule against hateful conduct.

Plaintiff sued Twitter alleging that “[Twitter] suspended his accounts based on his heterosexual and Christian expressions” in violation of the First Amendment, 42 U.S.C. § 1981, Title II of the Civil Rights Act of 1964, and for alleged “legal abuse.”

The court was quick to deny all of the claims, explaining that:

  1. Plaintiff had no First Amendment claim against Twitter because Twitter was not a state actor; the court had to painfully explain that just because Twitter is a publicly traded company doesn’t transform it into a state actor.
  2. Plaintiff had no claim under § 1981 because he didn’t allege racial discrimination.
  3. Plaintiff’s Civil Rights claim failed because: (1) under Title II, only injunctive relief is available (not damages like Plaintiff wanted); (2) Section 230 of the Communications Decency Act bars the claim; and (3) Title II does not prohibit discrimination on the basis of sex or sexual orientation (and no facts were asserted to support such a claim).
  4. Plaintiff failed to allege any conduct by Twitter that could plausibly amount to legal abuse.

The court noted that Plaintiff “expresses his difficulty in finding case law to support his claims.” Well, I guess it would be hard to find caselaw to support claims when you have no valid ones.

Citation: Wilson v. Twitter, Civil Action No. 3:20-0054 (S.D. W.Va. 2020)

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Section 230, the First Amendment, and You.

Maybe you’ve heard about “Section 230” on the news, or through social media channels, or perhaps by reading a little about it in an article written by a major publication … but unfortunately, that doesn’t mean that the information you have received is necessarily accurate. I cannot count how many times over the last year I’ve seen what seem to be purposeful misstatements of the law … which then get repeated over and over again – perhaps to fit some sort of political agenda. After all, each side of the aisle, so to speak, is attacking the law, but curiously for different reasons. While I absolutely despise lumping people into categories, political or otherwise, the best way I can describe the ongoing debate is that the liberals believe there is not enough censoring going on, and the conservatives think there is too much censorship going on. Meanwhile, you have the platforms hanging out in the middle, often struggling to do more with less …

In this article I will try to explain why I believe it is important for even lay people to understand Section 230, and dispel some of the most common myths that continually spread throughout the Internet as gospel … even from our own Congressional representatives.

WHY LAY PEOPLE SHOULD CARE ABOUT SECTION 230

Not everyone who reads this will remember what it was like before the Internet. If you don’t, ask your elders what it was like to be “talked at” by your local television news station or newspaper. There was no real open dialog absent face-to-face or telephone communications. You were limited in who you could share information with. Even if you wrote a “letter to the Editor” at a local newspaper, it didn’t mean that your “opinion” was necessarily going to be published. If you wanted to share a picture, you had to actually use a camera and film, take it to a developer, wait two weeks, pay for the developing, and pray that your pictures didn’t suck. I can’t tell you how many blurry photographs I have in a shoe box somewhere. Then you had to mail them, hand them out, or show your friends in person. And don’t even get me started about a phone that was stuck to the wall, where your “privacy” was limited to having a long phone cord that might stretch into the bathroom so you could shut the door. If you’re old enough to remember that, and are nodding your head in agreement … I encourage you to spend some time remembering what that was like. It seems that we non-digital natives are at a point in life where we take the technology we have for granted; and the digital natives (meaning they were born with all of this technology) don’t really know the struggles of life without it.

If you like being able to share information freely, and to comment on information freely, you absolutely should care about what many refer to as “Section 230.” So many of my friends, family, and colleagues say “I don’t understand Section 230 and I don’t care to … that’s your space,” yet these are the people I see posting content online about their businesses via LinkedIn or other social media platforms, sharing reviews of businesses they have been to, looking up information on Wikimedia, sharing their general opinions, and/or otherwise engaging in dialog and debate over topics that are important to them. In a large way, whether you know it or not, Section 230 has powered your ability to interact online in this way and has drastically shaped the Internet as we know it today.

IN GENERAL: SECTION 230 EXPLAINED

The Communications Decency Act (47 U.S.C. § 230) (often referred to as “Section 230” or the “CDA” or even “CDA 230”), in brief, is a federal law enacted in 1996 that, with a few exceptions carved out within the statute, protects the owners of websites/search engines/applications (each often synonymously referred to as “platforms”) from liability for third-party content. Generally speaking, if the platform didn’t actually create the content, it typically isn’t liable for it. Indeed, there are a few exceptions, but for now, we’ll keep this simple. Platforms that allow interactive third-party content are often referred to as user generated content (“UGC”) sites. Facebook, Twitter, Snapchat, Reddit, Tripadvisor, and Yelp are all examples of such platforms, and reasonable minds would likely agree that there is social utility behind each of these sites. That said, these household-name platform “giants” aren’t the only platforms on the Internet that have social utility and benefit from the CDA. Indeed, it covers all of the smaller platforms too, including bloggers or journalists who desire to allow people to comment on articles/content on their websites. Suffice it to say, there are WAY more little guys than there are big guys, or “Big Tech” as some refer to it.

If you’re looking for some sort of a deep dive on the history of the law, I encourage you to pick up a copy of Jeff Kosseff’s book titled The Twenty-Six Words That Created The Internet. It’s a great read!

ONGOING “TECHLASH” WITH SECTION 230 IN THE CROSS-HAIRS

One would be entirely naïve to suggest that the Internet is perfect. If you ask me, it’s far from perfect. I readily concede that there are indeed harms that happen online. To be fair, harms happen offline too, and they always have. Sometimes humans just suck. I’ve discussed a lot of this in my ongoing blog article series Fighting Fair on the Internet. What has been interesting to me is that many seem to want to blame people’s bad behavior on technology, and to try to hold technology companies liable for what bad people do using their technology.

I look at technology as a tool. By analogy, a hammer is a tool, yet we don’t hold the hammer manufacturer or the store that sold the hammer liable when a bad guy goes and beats someone to death with it. I imagine the counter-argument is that technology companies are in the best position to help stop the harms. Perhaps that may be true to a degree (and I believe many platforms do try to assist by moderating content and otherwise setting certain rules for their sites), but the question becomes: should they actually be liable? If you’re a Section 230 “purist,” the answer is “No.” Why? Because Section 230 immunizes platforms from liability for the content that other people say or do on their platforms. Platforms are still liable for the content they choose to create and post or otherwise materially contribute to (but even that is getting into the weeds a little bit).

The government, however, seems to have its own set of ideas. We already saw an amendment to Section 230 with FOSTA (the anti-sex trafficking amendment). Unfortunately, good intentions often make for bad law, and, in my opinion, FOSTA was one of those laws, one which has arguably been proven to cause more harm than good. I could explain why, but I’ll save that discussion for another time.

Then, in February of 2020, the DOJ had a “workshop” on Section 230. I was fortunate enough to be in the audience in Washington, D.C., where it was held, and recently wrote an article breaking down that “workshop.” If you’re interested in all the juicy details, feel free to read that article, but in summary it was basically four hours’ worth of: humans are bad and do bad things; technology is a tool with which bad humans do bad things; technology/platforms need to find a way to solve the bad human problem or face liability for what bad humans occasionally do with the tools they create; and we want to make changes to the law even though we have no empirical evidence to support the position that this is an epidemic rather than a minority … because bad people.

Shortly thereafter, the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (the EARN IT Act of 2019-2020) was dropped, a bill designed to prevent the online sexual exploitation of children. While this sounds noble (FOSTA did too), when you unpack it all and look at the bigger picture, it’s more government attempts to mess with free speech and online privacy/security in the form of yet another amendment to Section 230, under the guise of being “for the children.” I have lots of thoughts on this, but I will save them for another article another day too.

This brings us to the most recent attack on Section 230. The last two (2) weeks have been a “fun” time for those of us who care about Section 230 and its application. Remember how I mentioned above that some conservatives are of the opinion that there is too much censorship online? This often refers to the notion that social media platforms (Facebook, Twitter, and even Google) censor or otherwise block conservative speech. Setting aside whether this actually happens or not (I’ve heard arguments pointing in both directions on this issue), President Trump shined a big light on this notion.

Let me first start off by saying that there is a ton of misinformation shared online. It doesn’t help that many people in society will quickly share things without actually reading them or conducting research to see if the content has any validity, but will spend 15 minutes taking a data-mining quiz only to find out what kind of a potato they are. As a side note, I made that up in jest and then later found out that there really is a quiz to find out what kind of potato you are. Who knew the 2006 movie Idiocracy was going to be so prophetic? Although, I can’t really say this is just something that happens online. Anyone that ever survived junior high and high school knows that gossip is often riddled with misinformation, and somehow we seem to forget about the silliness that happens offline too. The Internet, however, has just given the gossipers a megaphone … to the world.

Along with other perceived harmful content, platforms have been struggling with how to handle such misinformation. Some have considered adding more speech by way of notifications or “labels,” as Twitter calls them, to advise their users that the information may be wholly made up or modified, shared in a deceptive manner, or likely to impact public safety or otherwise cause serious harm. Best I can tell, at least as far as Twitter goes, this seems to be a relatively new effort. Other platforms like Facebook have apparently resorted to taking people’s accounts down, putting odd cover-ups over photos, etc., for content they deem “unworthy” for their users. Side note: While ideal in a perfect world, I’m not personally a fan of social media platforms fact checking because: 1) it’s very hard to be an arbiter of truth; 2) it’s incredibly hard to do it at scale; 3) once you start, people will expect you to do it on every bit of content that goes out – and that’s virtually impossible; and 4) if you fail to fact check something that turns out to be false or otherwise misleading, people might assume that such content is accurate because they have come to rely on the fact checking. And who checks the fact checkers? Not that my personal opinion matters, but I think this is where the bigger tech companies have created more problems for themselves (and arguably for all the little sites that rely on Section 230 to operate without fear of liability).

So what kicked off the latest “Section 230 tirade”? Twitter “fact checked” President Trump in two different tweets on May 26th, 2020, by adding a “label” to the bottom of the Tweets (which you have to click on to actually see – they don’t transfer when you embed them as I’ve done here) that said “Get the facts about mail-in ballots.” This clearly suggests that Twitter was in disagreement with information that the President Tweeted and likely wanted its users to be aware of alternative views.

https://twitter.com/realDonaldTrump/status/1265255845358645254?s=20

To me, that doesn’t seem that bad. I can absolutely see some validity to President Trump’s concern. I can also see an alternative argument, especially since I typically mail in my voting ballot. Either way, adding content in this way, versus taking it down altogether, seems like the route that provides people more information to consider for themselves, not less. In any event, if you think about it, pretty much everything that comes out of a politician’s mouth is subjective. Nevertheless, President Trump got upset over the situation, suggested that Twitter was “completely stifling FREE SPEECH,” and then made veiled threats about not allowing that to happen.

https://twitter.com/realDonaldTrump/status/1265427539008380928?s=20

If we know anything about this President, it is that when he’s annoyed with something, he will take some sort of action. President Trump ultimately ended up signing an Executive Order on “Preventing Online Censorship” a mere two (2) days later. For those that are interested, Santa Clara Law Professor Eric Goldman provided a great legal analysis of the Executive Order, calling it “political theater” – while certainly left-leaning and unfavorable to our commander-in-chief. Even if you align yourself with the “conservative” base, I would encourage you to set aside the Professor’s personal opinions (we all have opinions) and focus on the meat of the legal argument. It’s good.

Of course, as expected, the Internet loses its mind and all the legal scholars and practitioners come out of the woodwork, commenting on Section 230 and the newly signed Executive Order, myself included. The day after the Executive Order was signed (and after President Trump likely read all the criticisms), he Tweeted out “REVOKE 230!”

https://twitter.com/realDonaldTrump/status/1266387743996870656?s=20

So this is where I have to sigh heavily. Indeed, there is irony in the fact that the President is calling for the revocation of the very same law that allowed innovation, and Twitter, to even become a “thing,” and which also makes it possible for him to reach out and connect to millions of people, in real time, in a pretty much unfiltered way, as we’ve seen, for free, because he has the application loaded on his smartphone. In my opinion, but for Section 230, it is entirely possible that Twitter, Facebook, and all the other forms of social media and interactive user sites would not exist today; at least not as we know them. Additionally, I find it ironic that President Trump is making free speech arguments when he’s commenting about, and on, a private platform. For those of you that slept through high school civics, the First Amendment doesn’t apply to private companies … more about that later.

As I said though, this attack on Section 230 isn’t just stemming from the conservative side. Even Joe Biden has suggested that Section 230 should be “repealed immediately,” but he’s on the whole “social media companies censor too little” train, which is the complete opposite of the reason that people like President Trump want it revoked.

HOW VERY AMERICAN OF US

How many times have you heard that Americans are self-centered jerks? Well, Americans do love their Constitutional rights, especially when it comes to falling in love with their own opinions and the freedom to share those opinions. Moreover, when it comes to the whole content moderation and First Amendment debate, we often look at the tech giants as purely American companies. True, these companies did develop here (arguably in large part thanks to Section 230); however, what many people fail to consider is that many of these platforms operate globally. As such, they are often trying to balance the rules and regulations of the U.S. with the rules and regulations of competing global interests.

As stated, Americans are very proud of the rights granted to them, including the First Amendment right to free speech (although after reading some opinions lately, I’m beginning to wonder if half the population slept through or otherwise skipped high school civics class … or worse, slept through Constitutional Law while in law school). However, not all societies have this speech right. In fact, Europe’s laws value privacy, as a right, over freedom of expression. A prime example of this playing out is Europe’s Right to Be Forgotten law. If you aren’t familiar, under this EU law, citizens can ask that even truthful, but perhaps older, information be taken down from the Internet (or, in some cases, not be indexed on EU search engines), or else the company hosting that information can face penalties.

When we demand that these tech giants cater to us here in the United States, we are forgetting that these companies have other rules and regulations to take into consideration when trying to set and implement standards for their users. What is good for us here in the U.S. may not be good for the rest of the world, whose residents are also their customers.

SECTION 230 AND FIRST AMENDMENT MYTHS SPREAD LIKE WILDFIRE

What has been most frustrating to me, as someone who practices law in this area and has a lot of knowledge when it comes to the business of operating platforms, content moderation, and the applicability of Section 230, is how many people who should know better get it wrong. I’m talking about our President, Congressional representatives, and media outlets … so many of them getting it wrong. And what happens from there? You get other people who regurgitate the same uneducated or otherwise purposeful misstatements in articles that get shared, which further perpetuates the ignorance of the law and how things actually work.

For example, just today (June 8, 2020) Jeff Kosseff Tweeted out a thread that describes a history of the New York Times failing to accurately explain Section 230 in various articles and how one of these articles ended up being quoted by a NJ federal judge. It’s a good thread. You should read it.

MYTH: A SITE IS EITHER A “PLATFORM” OR A “PUBLISHER”

Contrary to what so many people I’ve listened to speak, and so many articles I’ve read, would have you believe, when it comes to online UGC platforms there is no distinction between a “publisher” and a “platform.” You aren’t comparing the New York Times to Twitter. Working for a newspaper is not like working for a UGC platform. Those are entirely different business models … apples and oranges. Unfortunately, this is another spot where many people get caught up and confused.

UGC platforms are not in the business of creating content themselves, but rather in the business of setting their own rules and allowing third parties (i.e., you and I here on this platform) to post content in accordance with those rules. Even though some publications erred on the side of caution around 2006-2008 regarding editing UGC comments, that doesn’t mean that’s how the law was actually interpreted. We have decades’ worth of jurisprudence interpreting Section 230 (which is what the judicial branch does – interprets the law – not the FCC, which is an independent agency overseen by Congress). UPDATE 1/5/2021 – although now there is debate on whether the FCC can do so, and as of October 21, 2020, the FCC seems to think it does have such a right to interpret it. Platforms absolutely have the right to moderate content which they did not create and to kick people off of their platform for violating their rules.

Think of it this way – have you ever heard your parents say (or maybe you’ve said this to your own kids) “My house, my rules. If you don’t like the rules, get your own house.”? If anyone actually researches the history, that’s why Section 230 was created … to remove the moderator’s dilemma. A platform’s choice of what to allow, or disallow, has no bearing (for the sake of this argument here) on the applicability of Section 230. Arguably, UGC platforms also have a First Amendment right to choose what they want to publish, or not publish. So even without Section 230, they could still get rid of content they didn’t deem appropriate for their users/mission/business model.

MYTH: PLATFORMS HAVE TO BE NEUTRAL FOR SECTION 230 TO APPLY

Contrary to the misinformation being spewed all over (including by government representatives – which I find disappointing), Section 230 has never had a “neutrality” caveat for protection. Moreover, in the context of political speech, Senator Ron Wyden, a co-author of the law, even stated recently on Twitter: “let me make this clear: there is nothing in the law about political neutrality.”

You can’t get much closer to understanding Congressional intent of the law than getting words directly from the co-author of the law. 

Quite frankly, there is no such thing as a “neutral platform.” That’s like saying a cheeseburger is a cheeseburger is a cheeseburger. Respectfully, some cheeseburgers from certain restaurants are just way better than others. Moreover, if we limited content on platforms to only what is lawful – i.e., a common carrier approach where platforms would be forced to treat all legal content equally and refrain from discrimination – well, as someone that deals with content escalations for platforms, I can tell you that we would have a very UGLY Internet, because sometimes people just suck, or their idea of a good time and funny isn’t exactly age-appropriate for all viewers/users.

MYTH: CENSORSHIP OF SPEECH BY A PLATFORM VIOLATES THE FIRST AMENDMENT

The First Amendment absolutely protects the freedom of speech. In theory, you are free to put on a sandwich board that says (insert whatever you take issue with) and walk up and down the street if you want. In fact, we’re seeing such constitutionally protected demonstrations currently with the protesters all over the country in connection with the death of George Floyd. Peaceful demonstration (and yes, I agree, not all of it was “peaceful”) is absolutely protected under the First Amendment.

What the First Amendment does not do (and this seems to get lost on people for some reason) is give one the right to amplification of that speech on a private platform. One might wish that were the case, but wishful thinking does not equal law. Unless and until there is some law, one that passes judicial scrutiny, which deems these private platforms a public square subject to the same restrictions imposed on the government, they absolutely do not have to let you say everything and anything you want. Chances are, this is also explained in their Terms of Service, which you probably didn’t read, but should.

If you’re going to listen to anyone provide an opinion on Section 230, perhaps it should be a co-author of the law itself.

Think of it this way: if you are a bar owner and you have a drunk and disorderly guy in your bar who is clearly annoying your other customers, would you want the ability to 86 the person, or do you want the government to tell you that, as long as you are open to the public, you have to let that person stay in your bar even if you risk losing other customers because someone is being obnoxious? Of course you want to be able to bounce that person out! It’s not really any different for platform operators.

So for all of you chanting about how a platform’s censorship of your speech on its platform is impacting your freedom of speech – you don’t understand the plain language of the First Amendment. The law is “Congress shall make no law … abridging the freedom of speech …”, not “any person or entity shall make no rule abridging the freedom of speech …”, which is what people seem to think the First Amendment says or otherwise want it to say.

LET’S KEEP THE CONVERSATION GOING BUT NOT MAKE RASH DECISIONS

Do platforms have the best of both worlds? Perhaps. But what is worse: the way it is now, with Section 230, or what it would be like without Section 230? Frankly, I choose a world with Section 230. Without Section 230, the Internet as we know it will change.

While we’ve never seen what the Internet looks like without Section 230, I can imagine we would go one of two ways: 1) an Internet where platforms are afraid to moderate content, and therefore everything and anything would go up, leaving us with a very ugly Internet (because people are unfathomably rude and disgusting – I mean, content moderators have suffered from PTSD from having to look at what nasty humans try to share); or 2) an Internet where platforms are afraid of liability, and either UGC sites will cease to exist altogether or they may go to a notice-and-takedown model where, as soon as someone sees something they are offended by or otherwise don’t like, they will tell the platform the information is false, defamatory, harassing, etc., and that content would likely automatically come down. The Internet, and public discussion, would be at the whim of a heckler’s veto. You think speech is curtailed now? Just wait until the society of “everyone is offended” gets a hold of it.

As I mentioned to begin with, I don’t think that the Internet is perfect, but neither are humans and neither is life. While I believe there may be some concessions to be had, after in-depth studies and research (after all, we’ve only got some 24 years of data to work with, and those first years really don’t count in my book), I think it foolish to be making rash decisions based upon political agendas. If the politicians want their own platform where they aren’t going to be “censored,” and where people have ease of access to such information … create one! If people don’t like that platforms like Twitter, Facebook, or Google are censoring content … don’t use them, or use them less. Spend your time and money with a platform that more aligns with your desires and beliefs. There isn’t one out there? Well, nothing is stopping you from creating your own version (albeit, I understand that it’s easier said than done … but there are platforms out there trying to make that move). That’s what is great about this country … we have the ability to innovate … we have options … well, at least for now.

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Breaking down the DOJ Section 230 Workshop: Stuck in the Middle With You

The current debate over Section 230 of the Communications Decency Act (47 U.S.C. § 230) (often referred to as “Section 230” or the “CDA”) has many feeling a bit like the lyrics from Stealers Wheel’s Stuck in The Middle With You, especially the lines “clowns to the left of me, jokers to my right, here I am stuck in the middle with you.” The arguments about Section 230 are as polarized as the two extremes of the political spectrum seem to be these days. Arguably, the troubling debate is compounded by politicians who either don’t understand the law or purposefully misstate it in an attempt to further their own political agenda.

For those who may not be familiar with the Communications Decency Act: in brief, it is a federal law enacted in 1996 that, with a few exceptions carved out within the statute, protects the owners of websites/search engines/applications (each often synonymously referred to as “platforms”) from liability for third-party content. Platforms that allow third-party content are often referred to as user generated content (“UGC”) sites. Facebook, Twitter, Snapchat, Reddit, TripAdvisor, and Yelp are all examples of such platforms, and reasonable minds would likely agree that there is social utility behind each of these sites. That said, these household recognized platform “giants” aren’t the only platforms on the internet that have social utility and benefit from the CDA. Indeed, it covers all of the smaller platforms, including bloggers or journalists who desire to allow people to comment about articles/content on their websites.

So, what’s the debate over?  Essentially the difficult realities about humans and technology.  I doubt there would be argument over the statement that the Internet has come a long way since the early days of CompuServe, Prodigy and AOL. I also believe that there would be little argument that humans are flawed.  Greed was prevalent and atrocities were happening long before the advent of the Internet.  Similarly, technology isn’t perfect either.  If technology were perfect from the start, we wouldn’t ever need updates … version 1.0 would be perfect, all the time, every time.  That isn’t the world that we live in though … and that’s the root of the rub, so to speak.

Since the enactment of the CDA, an abundance of lawsuits have been initiated against platforms, the results of which have further defined the breadth of the law. For those really wanting to learn more and obtain a more historical perspective on how the CDA came to be, read Jeff Kosseff’s book, The Twenty-Six Words That Created the Internet. To help better understand some of the current debate over this law, which will be discussed shortly, this may be a good opportunity to point out a few of the (generally speaking) practical implications of Section 230:

  1. Unless a platform wholly creates or materially contributes to content on its platform, it will not be held liable for content created by a third party.  This immunity from liability has also been extended to other tort theories of liability where it is ultimately found that such theory stems from the third-party content.
  2. The act of filtering content by a platform does not suddenly transform it into a "publisher," i.e., treated as if it were the one that created the content in the first place, for the purposes of imposing liability.
  3. A platform will not be liable for its decision to keep content up, or take content down, regardless of whether such information may be perceived as harmful (such as content alleged to be defamatory).
  4. Injunctive relief (such as a take-down order from a court) is legally ineffective against a platform if such order relates to content for which the platform would have immunity.

These four general principles are the result of litigation that ensued against platforms over the past 23+ years.  However, a few fairly recent high-profile cases stemming from atrocities, along with our current administration (from the President down), have put Section 230 in the crosshairs, with desires for another amendment.  The question is, an amendment for what?  One side says platforms censor too much; the other side says platforms censor too little.  Platforms and technology companies are being pressured to implement stronger data privacy and security for their users worldwide, while the U.S. government complains that the measures being taken are too strong and therefore allegedly hinder its investigations.  Meanwhile the majority of the platforms are singing "stuck in the middle with you," trying to do the best they can for their users with the resources they have, which, unless you're "big Internet" or "big tech," are typically pretty limited.  And frankly, the Mark Zuckerbergs of the world don't speak for all platforms, because not all platforms are like Facebook, nor do they have the kind of resources that Facebook has.  When it comes to implementation of new rules and regulations, resources matter.

On January 19, 2020 the United States Department of Justice announced that it would be hosting a "Workshop on Section 230 of the Communications Decency Act" on February 19, 2020 in Washington, DC.  The title of the workshop: "Section 230 – Nurturing Innovation or Fostering Unaccountability?"  The stated purpose of the event was to "[D]iscuss Section 230 … its expansive interpretation by the courts, its impact on the American people and business community, and whether improvements to the law should be made."  The title of the workshop was intriguing because it seemed to suggest that the answer was one or the other, when the two concepts are not mutually exclusive.

On February 11, 2020 the formal agenda for the workshop (the link to which has since been removed from the government’s website) was released.  The agenda outlined three separate discussion panels:

  • Panel 1: Litigating Section 230, which was to discuss the history, evolution and current application of Section 230 in private litigation;
  • Panel 2: Addressing Illicit Activity Online, which was to discuss whether Section 230 encourages or discourages platforms to address online harms, such as child exploitation, revenge porn, and terrorism, and its impact on law enforcement; and
  • Panel 3: Imagining the Alternative, which was to discuss the implications of Section 230, and proposed changes to it, on competition, investment, and speech.

The panelists were made up of legal scholars, trade associations and a few outside counsel who represent plaintiffs or defendants.  More specifically, the panels were filled with many of the often-empaneled Section 230 folks, including legal scholars like Eric Goldman, Jeff Kosseff, Kate Klonick and Mary Anne Franks, as well as staunch anti-Section 230 attorney Carrie Goldberg, a victims' rights attorney who specializes in sexual privacy violations.  Added to the mix was Patrick Carome, who is famous for his Section 230 litigation work, defending many major platforms and organizations like Twitter, Facebook, Google, Craigslist, AirBnB, Yahoo! and the Internet Association.  Other speakers included Annie McAdams, Benjamin Zipursky, Doug Peterson, Matt Schruers, Yiota Souras, David Chavern, Neil Chilson, Pam Dixon, and Julie Samuels.

A review of the individual panelists' bios would likely signal that the government didn't want to include the actual stakeholders, i.e., representation from any platform's in-house counsel or in-house policy teams.  While not discounting the value of the speakers scheduled to be on panel, one may find it odd that those who deal with these matters every day – who represent the entities that would be most impacted by modifications to Section 230, and who would be in the best position to determine what is or is not feasible to implement if changes to Section 230 were to happen – had no seat at the discussion table.  This observation was widespread … much discussion took place on social media about the lack of representation of the true "stakeholders," with many opining that it wasn't likely to be a fair and balanced debate and that this was nothing more than an attempt by U.S. Attorney General William Barr to gather support for the bill that would punish platforms/tech companies for implementing end-to-end encryption.  One could opine that the bill really has less to do with Section 230 and more to do with the government wanting access to data that platforms may have on a few perpetrators who happen to be using a platform/tech service.

If you aren't clear on what is being referenced above, it bears mentioning that there is a bill titled the "Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2019," aka the "EARN IT Act of 2019," that was proposed by Senator Lindsey Graham.  This bill came approximately two weeks after AG Barr publicly demanded that Apple unlock and decrypt the Pensacola shooter's iPhone.  When Apple responded that it couldn't comply with the request, the government was not happy.  An article written by the Cato Institute stated that "During a Senate Judiciary hearing on encryption in December Graham issued a warning to Facebook and Apple: 'this time next year, if we haven't found a way that you can live with, we will impose our will on you.'"  Given this information, and the agenda topics, the timing of the Section 230 workshop seemed a bit more than a coincidence.  In fact, according to an article in Minnesota Lawyer, Professor Eric Goldman pointed out that the "DOJ is in a weird position to be convening a roundtable on a topic that isn't in their wheelhouse."

As odd as the whole thing may have seemed, I had the privilege of attending the Section 230 "Workshop".  I say "workshop" because it was a straight lecture, without the opportunity for any meaningful Q&A dialog with the audience.  Speaking of the audience, of the people I had direct contact with, the audience consisted of reporters, internet/tech/First Amendment attorneys, in-house counsel/representatives from platforms, industry association representatives, individual business representatives, and law students.  The conversations that I personally had, and personally overheard, suggested that the UGC platform industry (the real stakeholders) was concerned or otherwise curious about what the government was trying to do to the law that shields platforms from liability for UGC.

PANEL OVERVIEW:

After sitting through nearly four hours’ worth of lecture, and even though I felt the discussion to be a bit more well-rounded than I anticipated, I still feel that the entire workshop could be summarized as follows: “humans are bad and do bad things; technology is a tool in which bad humans do bad things; technology/platforms need to find a way to solve the bad human problem or face liability for what bad humans occasionally do with the tools they create; we want to make changes to the law even though we have no empirical evidence to support the position that this is an epidemic rather than a minority…because bad people.”

Perhaps that is a bit of an oversimplification but honestly, if you watch the whole lecture, that’s what it boils down to.

The harms discussed during the different panels included:

  • Libel (brief mention)
  • Sex trafficking (Backpage.com, FOSTA, etc.)
  • Sexual exploitation of children (CSAM)
  • Revenge porn aka Non-Consensual Pornography aka Technology Facilitated Harassment
  • Sale of drugs online (brief mention)
  • Sale of alleged harmful products (brief mention)
  • Product liability theory as applied to platforms (à la Herrick v. Grindr)

PANEL #1:

In traditional fashion, the pro-Section 230 advocates explained the history of the CDA and how it is important to all platforms that allow UGC, not just "big tech," and expounded on the social utility of the Internet … platforms large and small.  However, the anti-Section 230 panelists pointed mainly to harms caused by platforms (though they did not elaborate on which ones) failing to remove sexually related content (though defamation got a short mention in the beginning).

Ms. McAdams seemed to focus on sex trafficking – touching on how, once Backpage.com was shut down, a similar site started up in Amsterdam. She referred to the issues she was speaking about as a "public health crisis." Of course, Ms. Goldberg raised arguments relating to the prominent Herrick v. Grindr case, wherein she argued a product liability theory as a workaround to Section 230. That case ended when the petition for writ of certiorari was denied by the U.S. Supreme Court in October of 2019. I've heard Ms. Goldberg speak on this case a few times and one thing she continually harps on is the fact that Grindr didn't have a way to keep Mr. Herrick's ex from using its platform. She seems surprised by this. As someone who represents platforms, it makes perfect sense to me. We must not forget that people can create multiple user profiles, from multiple devices, from multiple IP addresses, around the world. Sorry, plaintiff attorneys…the platforms' crystal ball is in the shop on these issues … at least for now. Don't misunderstand me. I believe Ms. Goldberg is fighting the good fight, and her struggle on behalf of her clients is real! I admire her work and no doubt she sees it through a lens from the trenches she is in. That said, we can't lose sight of the reality of how things actually work versus how we'd like them to work.

PANEL #2:

There was a clear plea from Ms. Franks and Ms. Souras for something to be done about sexual images, including those exploiting children.  I am in 100% agreement that, while 46 states have enacted anti-"revenge porn" – or, better termed, non-consensual pornography – laws, such laws aren't strong enough because of the malicious-intent requirement.  All a perpetrator has to say is "I didn't mean to harm the victim, I did it for entertainment," or claim another seemingly benign purpose, and poof – case closed.  That struggle is difficult!

No reasonable person thinks these kinds of things are okay, yet there seemed to be an argument that platforms don't do enough to police and report such content.  The question becomes: why is that?  Lack of funding and resources would be my guess…either on the side of the platform OR, quite frankly, on the side of an under-funded/under-resourced government or agency that must actually handle what is reported.  What would be the sense of reporting unless you knew, for one, that the content was actionable, and that the agency it is being reported to would actually do something about it?

Interestingly, Ms. Souras made the comment that after FOSTA no other sites (like Backpage.com) rose up.  Curiously, that directly contradicted Ms. McAdams's statement about the Amsterdam website popping up after Backpage.com was shut down.  So which is it?  Pro-FOSTA statements also directly contradict what I heard last October at a workshop put on by ASU's Project Humanities entitled "Ethics and Intersectionality of the Sext Trade," which covered the complexities of sex trafficking and sex work.  Problems with FOSTA were raised during that workshop.  Quite frankly, I see all flowery statements about FOSTA as nothing more than trying to put lipstick on a pig; trying to make a well-intentioned, emotionally driven law look like it is working when it isn't.

Outside of the comments by Ms. Franks and Ms. Souras, AG Doug Peterson of Nebraska did admit that the industry may self-regulate, and that sometimes that happens quickly, but he still complained that Section 230's preemption of state criminal law makes his job more difficult, and he advocated for an amendment adding state and territory criminal law to the list of exemptions.  While that may sound moderate, state and federal criminal laws can differ, and arguably such an amendment would be overbroad when you are only talking about sexual images.  Further, the inclusion of Mr. Peterson almost seemed like a plug for a subtle push about how the government allegedly can't do its job without modification of Section 230 – and I think part of what that was leaning towards, while not making a big mention of it, was the end-to-end encryption debate.  In rebuttal to this notion, Matt Schruers suggested that Section 230 doesn't need to be amended, but that the government needs more resources so it can do a better job with the existing laws, and he encouraged tech to do better where it can – suggesting that efforts from both sides would be helpful.

One last important point made during this panel was Kate Klonick's distinction between the big companies and other sites that are hosting non-consensual pornography.  It is important to keep in mind that different platforms have different economic incentives and that platforms are driven by economics.  I agree with Ms. Klonick that we are in a massive "norm setting" period where we are trying to figure out what to do with things, and that we can't look to tech to fix bad humans (although it can help).  Sometimes to have good things, we have to accept a little bad as the trade-off.

PANEL #3:

This last panel was mostly a recap of the benefits of Section 230 and the struggles that we face when trying to regulate with a one-size-fits-all mentality, and I think most of the panelists seemed to agree that there needs to be some research done before we go making changes, because we don't want unintended consequences.  That is something I've been saying for a while, and something I reiterated during a free CLE titled "Summer School: Content Moderation 101," hosted by the ABA's Forum on Communications Law Digital Communications Committee, wherein Jeff Kosseff and I, in a panel moderated by Elisa D'Amico, Partner at K&L Gates, discussed Section 230 and a platform's struggle with content moderation.  Out of this whole panel, the one speaker who had most people grumbling in the audience was David Chavern, President of the News Media Alliance.  When speaking about solutions, Mr. Chavern likened Internet platforms to traditional media as if he were comparing two oranges, and opined that platforms should be liable just like newspapers.  Perhaps he doesn't understand the difference between first-party content and third-party content.  The distinction between the two is huge, and therefore I found his commentary to be the least relevant and helpful to the discussion.

SUMMARY:

In summary, there seem to be a few emotion-evoking ills in society (non-consensual pornography, exploitation of children, sex trafficking, physical attacks on victims, fraud, and the drug/opioid crisis) that the government is trying to find methods to solve.  That said, I don't think amending Section 230 is the way to address them unless and until there is reliable and unbiased data suggesting that the cure won't be worse than the disease.  Are the ills being discussed really prevalent, or do we just think they are because they are being pushed out through information channels on a 24-hour news/information cycle?

Indeed, reasonable minds would agree that we, as a society, should try to stop harms where we can, but we also have to stop regulating based upon emotions.  We saw that with FOSTA and, arguably, it has made things more difficult for law enforcement and victims alike, and has had unintended consequences for others, including chilled speech.  You simply cannot regulate the hate out of the hearts and minds of humans, and you cannot expect technology to solve such a problem either.  Nevertheless, that seems to be the position of many of the critics of Section 230.


Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

The Supreme Court of the United States Denies Petition to Review the California Supreme Court’s Decision in Hassell v. Bird

Another win for Section 230 advocates. Back in July I wrote a blog post entitled “Section 230 is alive and well in California (for now) | Hassell v. Bird” which outlined the hotly contested, and widely watched, case that started back in 2014. When I wrote that post I left off saying that “the big question is where will things go from [there].” After all, we have seen, and continue to see, Section 230 come under attack for a host of arguably noble, yet not clearly thought through, reasons including sex-trafficking (resulting in FOSTA).

Many of us practitioners weren't sure Hassell, after losing her case before the California Supreme Court as it pertained to Yelp, Inc., would actually appeal the matter to the U.S. Supreme Court.  This was based upon the fact that: there has been a long line of cases across the country holding that 47 U.S.C. § 230(c)(1) bars injunctive relief and other forms of liability against Internet publishers for third-party speech; the U.S. Supreme Court denied another similar petition in the not-so-distant past; and the Court held many years prior, in Zenith Radio Corp. v. Hazeltine Research, Inc., 395 U.S. 100 (1969), that "[o]ne is not bound by a judgment in personam resulting from litigation in which he is not designated as a party or to which he has not been made a party by service of process."

Clearly undeterred, in October of 2018 Hassell filed a Petition for Writ of Certiorari and accompanying Appendix, challenging the California Supreme Court's ruling, with the U.S. Supreme Court. Respondent Yelp, Inc. filed its Opposition to the Petition for Writ of Certiorari in December of 2018. Hassell's Reply in Support of the Petition was filed earlier this month, and all the materials were distributed to the Justices to be discussed at the conference scheduled for Friday, January 18, 2019. [I suppose it is good to see that the government shutdown didn't kick this matter down the road.]

Many of us Section 230 advocates were waiting to see if the U.S. Supreme Court would surprise us by granting the Petition. Nevertheless, based upon today's decision denying Hassell's Petition, it appears that this question will be reserved, if ever, for another day, and all is status quo with Section 230, for now.

Citations: Hassell v. Bird, 420 P.3d 776, 2018 WL 3213933 (Cal. Sup. Ct. July 2, 2018), cert. denied; Hassell v. Yelp, 2019 WL 271967 (U.S. Jan. 22, 2019) (No. 18-506)

The Ugly Side of Reputation Management: What Attorneys and Judges Need to Know

Once upon a time, not so long ago, there was no such thing as the Internet.  Information and news came from your local newspaper, television, or radio channel.  Research was done in good old-fashioned books, often at your local school, university or public library.  If the content you were seeking was "old," chances are you had to go look at microfiche. For those that are young enough to have no clue what I'm talking about, watch this video. Then BOOM! Along came the internet! Well, sort of.  It was a slow work in progress, but by 1995 the internet was fully commercialized here in the U.S.  Anyone else remember that horrible dial-up sound, followed by the coolest thing you ever heard in your life: "You've got mail!"?

As technology and the internet evolved, so did the ease of gathering and sharing information; not only by the traditional media, but by everyday users of the internet.  I've dedicated an entire series of blogs called Fighting Fair on the Internet just to the topic of people's online use.  Not every person who has access to the internet publishes flattering content (hello, Free Speech), nor do they necessarily post truthful content (ewww, defamation).  Of course, not all unflattering content is defamatory, so it's not illegal to be a crap talker, but some people try to overcome it anyway.  Either way, whether the information is true or false, such content has brought about a whole new industry for people and businesses looking for relief: reputation management.

Leave it to the entrepreneurial types to see a problem and find a lucrative solution to the same.  While there are always legitimate, ethical reputation management companies and lawyers out there doing business the right way (and kudos to all of them)…there are those that are, shall we say, operating through more "questionable" means.  Those that want to push the ethical envelope often come up with "proprietary" methods to help clients, which are often sold as removal or internet de-listing/de-indexing techniques that may include questionable defamation cases and court orders, use of bogus DMCA takedown notices, or "black hat" methods.  In this article I am only going to focus on the questionable defamation cases that result in an order for injunctive relief.

BACKGROUND: QUESTIONABLE DEFAMATION CASES AND COURT ORDERS

UCLA Professor Eugene Volokh and Public Citizen litigation attorney Paul Alan Levy started shedding public light on concerns relating to questionable court orders a few years ago.  In an amicus brief submitted to the California Supreme Court in support of Yelp, Inc. in Hassell v. Bird, Volokh offered his findings to the court, discussing how default proceedings are "far too vulnerable to manipulation to be trustworthy."

As the brief says:

Injunctions aimed at removing or deindexing allegedly libelous material are a big practice area, and big business….But this process appears to be rife with fraud and with other behavior that renders it inaccurate. And this is unsurprising, precisely because many such injunctions are aimed at getting action from third parties (such as Yelp or Google) that did not appear in the original proceedings. The adversarial process usually offers some assurance of accurate fact finding, because the defendant has the opportunity and incentive to point out the plaintiff’s misstatements. But many of the injunctions in such cases are gotten through default judgments or stipulations, with no meaningful adversarial participation.

The brief further pointed to seven (7) different methods that plaintiffs were using to obtain default judgments:

(1) injunctions gotten in lawsuits brought against apparently fake defendants;

(2) injunctions gotten using fake notarizations;

(3) injunctions gotten in lawsuits brought against defendants who very likely did not author the supposedly defamatory material;

(4) injunctions that seek the deindexing of official and clearly nonlibelous government documents – with no notice to the documents’ authors – often listed in the middle of a long list of website addresses submitted to a judge as part of a default judgment;

(5) injunctions that seek the deindexing of otherwise apparently truthful mainstream articles from websites like CNN, based on defamatory comments that the plaintiffs or the plaintiffs’ agents may have posted themselves, precisely to have an excuse to deindex the article;

(6) injunctions that seek the deindexing of an entire mainstream media article based on the source’s supposedly recanting a quote, with no real determination of whether the source was lying earlier, when the article was written, or is lying now, prompted by the lawsuit;

(7) over 40 “injunctions” sent to online service providers that appear to be outright forgeries.

Well, isn't that fun?  Months after the brief was filed in Hassell, Volokh published another article with the title "Solvera Group, accused by Texas AG of masterminding fake-defendant lawsuits, now being sued by Consumer Opinion over California lawsuits."  What was clear from all of this is that website owners who have been victims of the scheme are likely watching, and the authorities are too.  The U.S. Attorney's Office for the District of Rhode Island and the State of Texas both took interest in these situations…and I suppose it is possible that more will be uncovered as time goes on.

So how are these parties getting away with this stuff?  With the help of unscrupulous reputation management companies, associated defamation attorneys…and, unfortunately, trusting judges.  Some judges have taken steps to correct the problem once the issue was brought to their attention.  As for the attorneys involved, you have to wonder: were they actually "duped," as this Forbes article mentions, or did they know what they were doing?  Either way, it's not a good situation.  This isn't to say that every attorney who is questioned about this stuff is necessarily guilty of perpetrating a fraud upon the court or anything like that.  However, it should serve as a cautionary warning that this stuff is real, these schemes are real, clients can be really convincing, and if one isn't careful and fails to conduct appropriate and precautionary due diligence on a client and/or the documents provided by a client…it could easily be a slippery slope into Pandora's box.  After all, no one wants to be investigated by their state bar association (or worse) for being involved with this kind of mess.

Yes, there have been lots of great articles and discussion shedding light on the subject but the question then becomes, how do you tell the difference between a legitimate situation and a questionable situation?  The answer: recognize red flags and question everything.

RED FLAGS THAT SHOULD CAUSE YOU PAUSE

In December of 2016 I had the pleasure of traveling to Miami, FL for the Internet Lawyer Leadership Summit conference to present, for CLE, on multiple topics including this subject.  At that time I provided the group with some “red flags” based upon information I had then.  Since that time I have gained an even greater knowledge base on this subject simply by paying attention to industry issues and reading, a lot.  I have now compiled the following list of cautionary flags with some general examples, and practical advice that, at minimum, should have you asking a few more questions:

RED FLAGS FOR ATTORNEYS

  • If the entity or person feeding you the "lead" is in the reputation management industry.  You want to do some due diligence.  You could be dealing with a totally above-board individual or entity, and the lead may be 100% legit, BUT the industry seems to consist of multiple "companies" that often lead back to the same individual(s), and just because they are well known doesn't necessarily mean they are operating above board.  Do your homework before you agree to be funneled any leads.
  • If the client is asking you to make some unusual adjustments to your fee agreement.  Your fee agreement is likely pretty static.  If the client is requesting some unusual adjustments to your agreement that make you feel uncomfortable, you might want to decline representation.
  • If the client already has "all of the documents" and you don't actually deal with the defendant. We all want to trust our clients, but as some counsel have already experienced, just accepting what your client tells you and/or provides you as gospel, without a second thought, can land you in hot water.  Consider asking to meet the defendant in person, or have them appear before a person licensed to give an oath and check identification, such as a notary public of YOUR choosing, to ensure the defendant is real and that the testimony they are giving in the declaration or affidavit is real.  You want to make sure everything adds up, and communication by telephone or email may not protect you enough.  When it comes to documents provided by the client, or the alleged post author, watch for the following:
    • Ensure that the address listed on any affidavit or other document isn't completely bogus.  Run a search on Google – is it even a real address?  For all you know, you could be given the address of the local train tracks.
    • Ensure that any notary stamp on an affidavit is consistent with where the affiant purports to live. It will rarely make sense for an affiant to list their address as, for example, Plains, New York when the notary stamp suggests the notary is based out of Sacramento, California. It will make even less sense if the affiant supposedly lives out of the country but is being notarized by a notary in the States.
    • Ensure that the notary is actually a real notary.  You can typically find records of notaries with the Secretary of State of the state the notary is in.  Make sure they are a real person.  If you really want to be sure that they actually signed your document, and that it wasn't "lifted" from elsewhere (yay, technology), check in with the notary and/or see if their records are on file somewhere publicly that you can check.
  • If the entity alleged to be the plaintiff isn't actually a real entity in the state it purports, in the complaint, to be from.  If the plaintiff is supposed to be ABC Ventures, LLC out of San Diego, California, there should be a record of ABC Ventures, LLC actually listed, and active, on the California Secretary of State's website.  The people that you are talking to should also, in theory, be the members/managers of such entity.  For example, if you are always talking to a "secretary," you might want to insist on a more direct contact.
  • If the person or entity listed as the plaintiff isn't actually referenced in the subject URL in the complaint.  If a plaintiff is going to bring a case, they should at least have standing to do so.  You should be cautious of any plaintiff that isn't actually at issue or fails to have a valid direct connection that would give them standing to bring the claim.
  • If the subject post doesn’t contain any defamatory statements in the first place.  Just because a post isn’t flattering doesn’t mean that it is actually defamatory.  Similarly, public documents aren’t typically seen as defamatory either. Who is saying it is false? Why is the statement false? What evidence supports the allegation that it is false?  
  • If the subject posting is outside of the statute of limitations for bringing claims in the state in which you intend on filing.  Now, I know that some may disagree with me, and there may be bar opinions in different states that suggest otherwise; however, if you are presented with a post that is outside of the statute of limitations for bringing a claim for defamation, subject to the single publication rule, and there is no real reason for tolling (like the statement being held in a secret document not generally public – which pretty much excludes items on the internet), that may be of concern to you.  I wrote before on why the statute of limitations is important, especially if you are the type to follow ABA's Model Rules of Professional Conduct, Rule 3.1.  Even here in Arizona, the bar has raised concerns in disciplinary proceedings, in connection with other infractions, about bringing claims outside of the statute of limitations, citing a violation of ER 8.4(d).  See generally, In re Aubuchon, 233 Ariz. 62 (Ariz. 2013).
  • If a case was filed in a wholly separate state from the Plaintiff and Defendant and you are asked to be "local counsel" to marshal documents to the court or simply to submit them to a search engine like Google.  It is not improbable that local counsel will be called to assist with basic filings or to submit an order to Google.  It is also possible that such documents contain questionable materials.  It's always a good idea to review the materials and give them a heightened level of scrutiny before just marshaling them off to the court or search engine.  This is especially true if the Plaintiff is no longer associated with prior counsel and is just looking for a different lawyer to help with this "one thing," as if a submission from an attorney bears more weight than anyone else submitting it.
  • If the plaintiff claims to already know who the author of a subject alleged defamatory post is, yet the post itself is anonymous.  Yes, it is possible, based on an author's content and how much detail is placed in such a post, that one might be able to figure out who the author is. However, in my experience, many authors tend to write just vaguely enough to keep themselves anonymous.  If that is the case, without a subpoena to the content host, how does one actually know who the author is?  Some states, like Arizona, have specific notice requirements for subpoenas seeking identifying user information, which require notice to be posted in the same manner, and through the same medium, in which the subject posting was made.  If a notice isn't present on the website, there likely wasn't a subpoena (assuming the website requires strict compliance with the law). Mobilisa, Inc. v. Doe, 170 P.3d 712, 217 Ariz. 103 (Ariz. App. 2007).
  • If the case was settled in RECORD TIME.  Often these matters are being “resolved” within a few weeks to only a couple months.  As most of us know, the wheels of justice are SLOW.
  • If the case is settled without any answers or discovery being done.  This goes to my prior point about knowing who the real author is, or, for that matter, that the allegations in a subject post are even false.
  • If notice about the case was not personally served by a process server.  Many states allow certified mailing for service.  Do you really know who is signing that little green form and accepting service?  Was some random person paid to sign that?

RED FLAGS FOR JUDGES (Consider all of the above generally plus the following)

  • If a Complaint is filed and shortly thereafter a stipulated judgment is presented requesting injunctive relief without the defendant ever actually making an appearance.  This seems to be one of the more popular tactics.  A way to curb this kind of abuse would be to hold a hearing where all parties must appear in person before the court (especially the named defendant signing the stipulation) before any such injunctive order is signed and entered.
  • If an attorney files an affidavit claiming a good-faith attempt to locate the defendant, but discovery was never conducted upon the hosting website.  Many sites will respond to discovery so long as their state's laws for obtaining such information (like Arizona's Mobilisa case) are followed.  Arguably, it is disingenuous for an attorney to say they have tried when they really haven't.  Chances are, the real author may not even know about the case, and entering a default judgment under such circumstances deprives them of the opportunity to appear and defend against the matter.
  • If you order the parties to appear and then suddenly the case gets dismissed.  It thwarts the scheme when the court requests the parties to appear.  If this happens in a defamation-related case, it could be seen as a red flag.  The plaintiff may very well try to dismiss the action and simply refile under a different plaintiff and defendant name, but for the same URL that was at issue in the prior dismissed action.
  • If the order for injunctive relief contains URLs that were not originally part of the Complaint.  Sneaky plaintiffs and their counsel may attempt to include other postings, from the same or different websites, that are not really at issue and/or that were arguably written by other individuals.  Make sure that the URLs listed on the order are all the same as what is listed on the complaint.
  • If the complaint contains a host of posts, with a wide range of dates, and the syntax of the posts is different, yet the plaintiff claims that they were all written by the same person.  In my experience, very rarely (though it does happen) will one person go on a binge and write a bunch of different posts about one person or entity.  There is typically more than one author involved, so any statement to the contrary should raise a red flag.

Some journalists that have been tracking these kinds of matters think that these schemes may be nearing an end.  I would like to think so, however, in my opinion these problems are far from over unless unsuspecting attorneys, judges, and even websites and search engines get a little more cautious about how they process these court orders for content removal, especially if they are older orders.  I have already discussed why I thought search engine de-indexing isn’t necessarily a viable reputation management solution and in part that is because, arguably, at least for now, Section 230 of the Communications Decency Act  bars injunctive relief, i.e., there is no obligation for websites to remove content anyway.  If a platform or search engine decides to remove content or otherwise de-index content, at least here in the U.S., they are doing so based upon their own company policy…not some legal duty.

In a perfect world none of these issues would exist. Unfortunately, that’s not the world we live in and the best we can do is be vigilant. Hopefully, through this article, I have provided some food for thought for attorneys and judges alike. You never know when such a situation will arise.

All information contained in this blog (www.beebelawpllc.blog.com) is meant to be for general informational purposes only and should not be misconstrued as legal advice or relied upon.  All legal questions should be directed to a licensed attorney in your jurisdiction.

“Internet Law” explained

For some reason, every time one says “lawyer” people tend to think of criminal law, family law or personal injury law.  Perhaps because those are very common.  Most people even understand the concept of a corporate or business lawyer, someone who handles trust and estates, or even one that handles intellectual property.  However, when we say “Internet Law” many people get the most confused look on their face and say: “What the heck is that?” If that is you, you’re in good company.  And, to be fair, the Internet really hasn’t been around all that long.

If you were to read the “IT law” page on Wikipedia you’d see a section related to “Internet Law” but even that page falls a little short on a solid explanation – mostly because the law that surrounds the Internet is incredibly vast and is always evolving.

When we refer to "Internet Law" we are really talking about how varying legal principles and surrounding legislation influence and govern the internet, and its use.  For example, "Internet Law" can incorporate many different areas of law such as privacy law, contract law and intellectual property law…all of which were developed before the internet was even a thing.  You also have to think about how the Internet is global, and how laws and the application of those laws can vary by jurisdiction.

Internet Law can include the following:

  • Laws relating to website design
  • Laws relating to online speech and censorship of the same
  • Laws relating to how trademarks are used online
  • Laws relating to what rights a copyright holder may have when their images or other content is placed and used online
  • Laws relating to Internet Service Providers and what liabilities they may have based upon data they process or store or what their users do on their platforms
  • Laws relating to resolving conflicts over domain names
  • Laws relating to advertisements on websites, through apps, and through email
  • Laws relating to how goods and services are sold online

As you can see just from the few examples listed above, a lot goes into "Internet Law," and many Internet Law attorneys will pick only a few of these areas to focus on because it can be a challenge just to keep up.  Indeed, perhaps more than most other areas of law, "Internet Law" is not static and is always evolving.

Do you think you have an Internet Law related question? If you are in the state of Arizona and are looking for that solid "friend in the lawyering business" consider Beebe Law, PLLC!  We truly enjoy helping our business and individual clients and strive to meet and exceed their goals!  Contact us today.

All information contained in this blog (www.beebelawpllc.blog.com) is meant to be for general informational purposes only and should not be misconstrued as legal advice or relied upon.  All legal questions should be directed to a licensed attorney in your jurisdiction.


Section 230 is alive and well in California (for now) | Hassell v. Bird

Last week, on July 2, 2018, the Supreme Court of California overturned rulings that arguably threatened the ability of online platform users to share their thoughts and opinions freely, by ruling in favor of Yelp in the hotly contested and widely watched Hassell v. Bird case.

For those that aren’t familiar with the underlying facts, I offer the following quick background:

In 2014 a dispute arose between California attorney Dawn Hassell and her former client, Ava Bird, when Bird posted a negative review of Hassell on the popular business review site Yelp.  Hassell claimed that the content of the post was, among other things, defamatory, and commenced an action against Bird for the same in the Superior Court of the County of San Francisco, Case No. CGC-13-530525. Bird failed to appear, and the Court entered a default order in favor of Hassell.  There is a question as to whether Bird was actually served.  In addition, the court ordered Yelp, a non-party to the case who did not receive notice of the hearing, to remove reviews purportedly associated with Bird, without explanation, and enjoined Yelp from publishing any reviews from the suspected Bird accounts in the future.  Yelp challenged this order, but the court upheld its ruling.

Hoping for relief, Yelp appealed the decision to the California Court of Appeal, First Appellate District, Division Four, Case No. A143233. Unfortunately for Yelp, the Appellate Court offered no relief and held that: Yelp was not aggrieved by the judgment; the default judgment, which included language requiring non-party Yelp to remove the reviews from the website, was proper; Yelp had no constitutional right to notice and a hearing on the trial court's order to remove the reviews from the website; the order to remove the reviews from Yelp and to prohibit publication of future reviews was not an improper or overly broad prior restraint; and the Communications Decency Act ("CDA" or "Section 230") did not bar the trial court's order to remove the reviews.

The Appellate Court’s ruling was clearly contrary to precedent in California and elsewhere around the country. Yelp appealed the matter to the California Supreme Court, Case No. S235968, to “protect its First Amendment right as a publisher, due process right to a hearing in connection with any order that targets speech on Yelp’s website, and to preserve the integrity of the CDA” according to the blog post written by Aaron Schur, Yelp’s Deputy General Counsel. While Yelp led the charge, they were not left to fight alone.

The internet rallied in support of Yelp.  Dozens of search engines, platforms, non-profit organizations and individuals who value the free sharing of information and ideas contributed amicus letters and amicus briefs (I co-authored an amicus brief for this case) in support of Yelp, including assistance from those like UCLA Law Professor and Washington Post contributor Eugene Volokh and Public Citizen litigator Paul Alan Levy, whose work spotlighted the ease with which bogus court orders and default judgments are obtained for the sole purpose of getting search engines like Google to de-index content.  In case you are wondering, bogus court orders and false DMCA schemes are indeed a real problem that many online publishers face.

On April 3, 2018 the California Supreme Court heard oral argument on the case. On July 2, 2018 the Supreme Court released its 102-page opinion in a 3-1-3 decision (three on a plurality opinion, one swing concurring, and three dissenting via two opinions), holding that Hassell's failure to name Yelp as a defendant – an end-run tactic – did not preclude the application of CDA immunity.  The court clearly stated: "we must decide whether plaintiffs' litigation strategy allows them to accomplish indirectly what Congress has clearly forbidden them to achieve directly.  We believe the answer is no." Based upon this win for the Internet, at least for now, online publishers in California (or those who have had this case thrown at them in demand letters or pleadings since the original trial and appellate court rulings) can breathe a sigh of relief that they cannot be forced to remove third-party content.

Aaron Schur made an important statement in concluding the Yelp blog post: "…litigation is never a good substitute for customer service and responsiveness, and had the law firm avoided the courtroom and moved on, it would have saved time and money, and been able to focus more on the cases that truly matter the most – those of its clients."  It's important in both our professional and personal lives to not get stuck staring at one tree when there is a whole forest of beauty around us.

While this is indeed a win, and returns the law back to the status quo in California, it does raise concern for some that certain comments in the opinion are signaling Congress to modify Section 230 again (referring to the recent enactment of FOSTA).  Santa Clara Law Professor Eric Goldman broke down the Court's lengthy opinion (a good read if you don't want to spend the time to review the full opinion) while pointing out that "fractured opinions raise some doubts about the true holding of [the] case."  The big question is: where will things go from here?  Indeed, only time will tell.

Citation: Hassell v. Bird, 2018 WL 3213933 (Cal. Sup. Ct. July 2, 2018)

Fighting Fair on the Internet – Part 10 | That Would be Harsher Punishment for Internet Defamers Stan…

For many reasons the movie Miss Congeniality with Sandra Bullock has been a long-time favorite of mine.  Especially when she answered the question "What is the one most important thing our society needs?" with "That would be harsher punishment for parole violators, Stan…and world peace!"  I'm pretty sure that since that movie first came out in 2000 I have been remixing that one-liner to fit my varying smarty-pants comeback needs.  In fact, while muddling to myself just this morning after reviewing some dyspeptic online commentary, I determined that I would answer that question with "That would be harsher punishment for internet defamers, Stan…and world peace!"  It's true…internet defamers and harassers really do suck.

In my line of work, and in my everyday life, I see people being nasty to one another online – and sometimes people really cross the line and forget that words do hurt.  Sometimes I wonder what happened to the good old-fashioned "take it out behind the barn and duke it out…loser buys the other guy a drink" form of justice.  Back in the day (and I really hate saying that because I am not THAT old), if anyone ran their mouth in person like they do today online – man, they'd get a beat down and, quite honestly, they would have probably deserved it.  To make matters worse, you get the morons that jump on the keyboard-warrior bandwagon without having the first clue about what is REALLY going on, and they either share the crap out of the false stuff or otherwise join in on the bashing.  When is enough, enough?  What the hell happened to the human connection and manners?  So much of society needs a good metaphorical kick in the teeth.  The First Amendment doesn't shelter you from false and defamatory statements, nor should it be abused as a license to be a jerk-face.  Unfortunately, unlike the "old days," it no longer hurts to be stupid and run your mouth.

Indeed, I am a Section 230 Communications Decency Act ("CDA") supporter, because I don't think that websites should be held liable for the stupid crap that other people do; after all, that mentality is akin to an overweight person blaming the spoon manufacturer for making a spoon that they can use to eat and get fat with.  "…but, but, the spoon made me fat!"  And to those who just read that and got all defensive – clearly my reference isn't to those who have medical issues or things outside of their control.  I'm talking about the person who is heavy because of purposeful overeating, failing to exercise, etc.  Sometimes life happens.  We get busy and fail to take care of ourselves as we should, but we can't blame the spoon manufacturer for it.  The spoon didn't make us fat.  We have no one to blame but ourselves.  This is absolutely no different, and trying to hold websites liable for the stupidity of third parties is asinine to me.  Yes, yes, I am well aware that the CDA protects websites from liability for third-party content; however, it doesn't seem to stop people and attorneys from filing frivolous lawsuits…but I digress here.  That is another story for another day.  However, I do think that there should be some serious punishment for all these people who purposefully go out of their way to post false and defamatory information about others…the same goes for harassers.  Perhaps if these people got hit harder in the pocketbook, or were forced into doing community service – like helping with anti-bullying and harassment initiatives – maybe THEN it would slow down. There just needs to be more education and more deterrents.  It's far too easy to sit behind the keyboard and be mean.  MEAN. PEOPLE. SUCK.

Until next time friends…


Fighting Fair on the Internet – Part 9 | Troubles with Defamatory Online Reviews and Content Scrapers

Content scrapers are problematic for authors, defamation plaintiffs and website operators alike.

There is no doubt that there is typically a clash of interests between authors, defamation plaintiffs and the operators of websites who host public third-party content.  Authors either want the information to stay or be removed; defamation plaintiffs want information removed from the website(s); and website operators, such as many of the online review websites, fight for freedom of speech and transparency – often arguing, among many other things, that the information is in a public court record anyway, so removal is moot.  These kinds of arguments, often surrounding the application of the federal law known as the Communications Decency Act, or Section 230 (which arguably provides that websites don't have to remove content even if it is false and defamatory), are playing out in courts right now.  One example is the case of Hassell v. Bird, which is up on appeal before the California Supreme Court relating to a posting on Yelp.  However, in spite of these clashes of interests, there does seem to be a trend emerging where the author, the plaintiffs, and the websites are actually standing in the same boat, facing the same troublemaker.

Providing some background and context…

COPYRIGHT AND POSTING AN ONLINE REVIEW:  Many people are familiar with the term "copyright" and have a basic understanding that a copyright is a legal right, created by law, that gives the creator of an original work limited exclusive rights to its use and distribution.  Wikipedia has some decent general information if you are interested in learning more.  For example, a guy who I will call John for the purpose of this story can get on a computer and draft up a complaint about Jane and her company XYZ before he posts it online on a review website.  As it sits on John's computer as written, John would own the copyright to that information.  When John decides to post it online to a review website, depending on the website's terms of service, John may have assigned his copyright rights to the website on which he was posting.  So either John or the website may own the copyright to that content.  That point is important for a few reasons, and there are arguments for and against such an assignment, but those issues are for another article some other time.

DEFAMATORY POSTING IS PUBLISHED ONLINE:  Continuing with the story, let’s say that John makes a bad call in judgment (because he hasn’t sat through one of my seminars relating to internet use and repercussions from the same, or hasn’t read my article on not being THAT guy, and doesn’t realize how bad doing this really is) and decides to post his false and defamatory posting about Jane and XYZ to an online review website.  It’s totally NOT COOL that he did that but let’s say that he did.  Now that posting is online, being indexed by search engines like Google, and anyone searching for Jane or XYZ might be seeing John’s posting.

WHAT TO DO WITH THE DEFAMATORY POSTINGS:  The internet tends to work at lightning speed, and John's post is sure to be caught on to by Jane or by someone who knows Jane or her company XYZ.  As an aside, I always recommend that people and businesses periodically, like once a month, run searches about themselves or their businesses just to see what pops up.  It's just a good habit to get into, because if there is a problem you will want to address it right away – especially if you think it is false and defamatory and want to take legal action, because there are pretty strict statutes of limitations on those claims – many states only providing one year from the date of publication.  When Jane learns of the posting, maybe she knows who John is by what was said in the posting – and maybe she isn't sure who posted it – but either way, chances are she is going to seek legal help to learn more about her options.  Many people in Jane's position will want to threaten to sue the website…but it's actually not that simple.  Why?  Because unless the website actually contributed to writing the stuff, which it most likely didn't, it can't be held liable for that content.  That's the law here in the United States – the Communications Decency Act.  Fortunately, while online defamation is a niche area of law, there are many attorneys around the country who are well versed in online defamation and able to assist people who find themselves in this kind of a situation.

So by now you are probably wondering how in the world a defamed party and a website could both be standing in the same boat.  I promise I am getting there but I felt the need to walk through this story for the benefit of those who don’t work in this field and have little to no clue what I am even talking about.  Baby steps…I’m getting there.

A FIGHT FOR REMOVAL:  As I pointed out in the beginning, arguably under the law, websites don’t have to remove the content even if it is found by a court or otherwise to be false and defamatory and that leaves plaintiffs in an awkward position.  They want the information taken down from the internet because it’s alleged to be harmful.  What can be done all depends on the website the content is on.

REPUTATION MANAGEMENT:  Many people think that reputation management is the way to go.  However, while reputation management can be helpful in some instances – and I'm not trying to knock those legitimate companies out there that can definitely help a company with increasing their advertising and image online – many find it to be only a temporary band-aid when trying to cover up negativity.  Similarly, in some cases, some reputation management companies may employ questionable tactics such as bogus DMCA notices or fake court orders.  Yes, those situations are real – I actually just presented on that topic to a group of internet lawyers less than two months ago – and I caution anyone who is using or considering a reputation management company that guarantees removal of content from the internet.

A WEBSITE'S INTERNAL POLICING:  The same law that protects websites from liability for third-party content is the same law that encourages self-policing by providing for editorial discretion on what to post and not post.  As such, some websites have taken a proactive approach and created their own internal policing systems where, depending on the circumstances and what was written, the website might find that the posting violated their terms of service and, within their discretion, take some sort of action to help a victim out.  Not every website has this, but it's certainly worth checking into.

COURT ORDERS:  Remember, a website, arguably per the law, doesn’t necessarily have to take a posting down regardless of what a court order says.  Shocking, but this has been found to be true in many cases around the country.  So what do websites do?  Here are a few ways websites might treat a court order:

  • Some websites will, without question, accept a court order regardless of jurisdiction and remove content – even if the order was entered by default, meaning the defendant didn’t appear and defend the case.  It’s worthwhile to note that some people won’t appear and defend because: 1) they never got notice of the lawsuit in the first place; 2) they didn’t have the knowledge to fight the case themselves; or 3) they didn’t have the resources to hire an attorney to fight the case – let’s face it, good lawyers are expensive!  Even cheap lawyers are still expensive.
  • Some websites will remove a posting only if there is some sort of evidence supporting the court order – like the defendant having appeared and agreed to removal, or even a simple affidavit from the author conceding that the information is false and agreeing to remove it.
  • Some websites will only redact the specific content that the court, based on evidence, has found to be false and defamatory.  This means that opinions and other speech that would be protected under the law, such as true statements, would remain posted on the website.
  • And still other websites won’t even bother with a court order because they are out of the country and/or just don’t give a crap.  These types of websites are rumored to try to get people to pay money in order for something to be taken down.

COURT ORDER WHACK-A-MOLE WITH SEARCH ENGINES LIKE GOOGLE:  One of the biggest trends is to get a court order for removal and send it to search engines like Google for de-indexing.  De-indexing removes the specific URL in question from the search engine’s index in that particular country.  I make this jurisdictional point because countries in the European Union have a “Right to be Forgotten” law, and search engines like Google are required to remove content from searches originating in Europe – but that is not the law in the US.  The laws are different in other countries, and arguably Google doesn’t have to remove anything from its searches in the US.  Going back to our story with John, Jane, and company XYZ: if Jane manages to litigate the matter and get a court order for removal of the posting’s URL from the search engine’s index, then, in theory, Jane’s name and company would no longer be associated with the posting in search results.

Now this all sounds GREAT, and it seems to be one of the better solutions employed by many attorneys on behalf of their clients, BUT there are a few problems with this method, and it becomes a game of legal whack-a-mole:

  1. A website could change the URL, which would toss the content back into the search engine’s index and make it searchable again.  The party would either have to get a new court order or, at the very least, re-submit the existing court order to the search engine with the new URL.  (See the monitoring sketch after this list.)
  2. If sending the court order to Google, Google will typically post a notice in its search results that a result was removed pursuant to a court order, with a link to the Lumen Database where people can see specifically which URLs were removed from the index along with any supporting documentation.  That documentation typically includes the court order, which may or may not include information relating to the offending content.  Anyone can then seek out the court case information and, in many cases, even pull the underlying Complaint from online, learn exactly what the subject report said, and learn whether the case was heard on the merits or was entered by default or through some other court process.  Arguably, the information really isn’t gone for those who are willing to do their homework.
  3. The First Amendment and many state privilege laws allow the press, bloggers, etc. to make a story out of a particular situation so long as they quote exactly from a court record.  No doubt a court record relating to defamation will contain the exact defamatory statements that were posted on the internet.  So, for example, any blogger or journalist living in a jurisdiction that recognizes the privilege, without conditioning it on defamation, could write a story about the situation, quote the content verbatim out of the court record as part of their story, and publish that story online, inclusive of the defamatory content, without liability.
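For the whack-a-mole problem in point 1, here is a minimal sketch in Python of the kind of periodic check a party (or their counsel’s staff) could run to notice when a “removed” page dies or moves.  The URL is a made-up placeholder, and this is an illustration under my assumptions, not anyone’s actual workflow:

```python
# A minimal sketch of monitoring a de-indexed URL: fetch it periodically
# and record a hash of the body, so you notice when the page dies (it may
# have moved to a new URL) or changes. The URL below is a placeholder.
import hashlib
import urllib.request

monitored_urls = ["https://reviews.example.com/posts/12345"]  # placeholder

def fingerprint(url):
    """Fetch the page and return a short hash of its body, or None if it's gone."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return hashlib.sha256(resp.read()).hexdigest()[:16]
    except Exception:
        return None  # e.g., a 404: the content may have moved to a new URL

for url in monitored_urls:
    fp = fingerprint(url)
    if fp is None:
        print(f"{url}: unreachable -- check whether the content moved to a new URL")
    else:
        print(f"{url}: still live (content hash {fp})")
```

If the original URL goes dead while the same content surfaces at a new address, that is the cue to re-submit the court order to the search engine with the updated URL.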

The uphill battle made WORSE by content scrapers.

With all that I have said above, which is really just a 10,000-foot view of the underlying jungle, poor Jane in my example has one heck of an uphill battle regarding the defamatory content.  Further, in my example, John only posted on one review website.  Now enter the content scrapers, who REALLY muck up the system, causing headaches for authors, for defamation plaintiffs, and for website providers like review websites.

CONTENT SCRAPERS:  When I say “content scrapers,” for the purpose of this blog article, I am referring to all of these new “review websites” that are popping up all over and that, to get their start, appear to be systematically scraping (stealing) the content of review websites that have been around for a long time and putting it on their own websites.  Why would anyone do this, you ask?  Well, I don’t know exactly, but I could surmise that: 1) the content helps their rankings online, which helps generate traffic to their websites; 2) traffic to a website helps bring in advertising dollars from the ads running on their websites; and 3) if they are out of the country (and many appear to be outside the United States) they don’t really give a crap and can solicit money from people who write in and ask for content to be taken down.  I sometimes refer to these websites as copycat websites.
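As a side note for the technically curious, spotting a scraped copy usually doesn’t require anything fancy.  Here is a minimal sketch in Python, with made-up placeholder text, of the kind of similarity check a site operator or a victim could use to flag a likely copycat page:

```python
# A minimal sketch of flagging a suspected copycat page by comparing the
# original posting's text to the text found on the suspect site. Both
# strings are made-up placeholders; real scraped pages would need the
# HTML stripped out first.
from difflib import SequenceMatcher

original = "Stay away from XYZ -- Jane ripped me off and never called back."
suspect = "Stay away from XYZ... Jane ripped me off and never called back!!"

similarity = SequenceMatcher(None, original, suspect).ratio()
print(f"Similarity: {similarity:.0%}")
if similarity > 0.9:  # arbitrary threshold chosen for this sketch
    print("Likely a scraped copy.")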

CONTENT SCRAPERS CAUSE HEADACHES FOR AUTHORS:  Many people have a favorite review website that they turn to for information – be it Yelp for reviews of a new restaurant they want to try, TripAdvisor for people’s experiences with a particular hotel or resort, or any other online review website…it’s brand loyalty, if you will.  An author has the right to choose which website they are willing to post their content on and, arguably, that decision could be based in part on the particular website’s Terms of Service as they relate to that content.  For example, some websites will allow you to edit and/or remove content that you post, while other websites will not allow you to remove or edit content once it is posted.  I’d like to think that many people look to see how much flexibility is provided with respect to their content before they choose which forum to post it on.

When a copycat website scrapes/steals content from another review website, it takes away the author’s right to choose where their content is placed.  Along the same lines, the copycat website may not provide an author with the same level of control over their content.  Going back to my John, Jane, and XYZ example: if John posted his complaint about Jane on a website that allowed him to remove it at his discretion, it’s entirely possible that a pre-litigation settlement could be reached where John voluntarily agreed to remove his posting, or John might decide to do so of his own accord after he cooled down and realized he made a big mistake posting the false and defamatory content about Jane online.  However, once a copycat website steals that content and places it on its own website, John not only has to argue over whether or not he posted the content on the other website, but he also may not be able to carry out a pre-litigation settlement or remove the content at his own direction.  In fact, there is a chance that the copycat website will demand money to take it down – and then, who knows how long it will even stay down.  After all, the copycat website clearly doesn’t care about the law – stealing content is arguably copyright infringement in the first place.

CONTENT SCRAPERS CAUSE HEADACHES FOR DEFAMATION PLAINTIFFS:  As discussed within this article, defamation plaintiffs have an uphill battle when it comes to pursuing defamation claims and trying to get content removed from the internet.  It almost seems like a losing battle, but that appears to be the price paid for keeping freedom of speech alive and maintaining a level of transparency.  Indeed, there is value in not stifling free speech.  However, when people abuse their freedom of speech and cross the line online, as John did in my example, it makes life difficult for plaintiffs.  It’s bad enough when people like John post on one website, but when a copycat website then steals content from other review websites and posts it on its own website(s), the plaintiff has to fight the battle on multiple fronts.  Just when a plaintiff makes headway with the original review website, the stolen content shows up on another website.  And, depending on the copycat website’s own Terms of Service, there is a chance that it won’t come down at all and/or the copycat website will demand money to take down the content that it stole.  Talk about frustrating!

CONTENT SCRAPERS CAUSE HEADACHES FOR REVIEW WEBSITES:  When it comes to online review sites, it’s tough to be the middle man…and by middle man I mean the operator of the review website.  The raging a-holes of the world get pissed off when you don’t allow something “over the top” to be posted on your website and threaten to sue – arguing you are infringing on their First Amendment rights.  The alleged defamation victims of the world get pissed off when you do allow something to be posted and threaten to sue because, well, they claim they have been defamed and they want justice.  The website operator gets stuck in the middle, having zero clue who anyone is, and is somehow supposed to play judge and jury to thousands of postings a month?  Not that I’m trying to write myself out of a job, but some of this stuff gets REALLY ridiculous, and some counsel are as loony as their clients.  Sad but true.  And, as if dealing with these kinds of issues wasn’t enough, enter the exacerbators, i.e., the copycat websites.

To begin with, website operators that have been around for a long time have earned their rankings.  They have had to spend time on marketing and interacting with users and customers in order to get where they are – especially those that have become popular online.  Like any business, a successful website takes hard work.  Copycat websites, which steal content, are just taking a shortcut to the top while stepping on everyone else.  They get the search engine rankings, they get the advertising dollars, and they didn’t have to do anything for it.  To top it off, while the algorithms change often and I am no search engine optimization (SEO) expert, I suspect that many of the original websites may see a reduction in their own rankings because of the duplicative content online.  Reduced rankings and traffic may lead to a reduction in revenue.

I like to think that many website operators try hard to find a happy medium between freedom of speech and curtailing over-the-top behavior.  That’s why websites have terms of service on what kind of content is and is not allowed, and users are expected to follow the rules.  When a website operator learns of an “over the top” posting or another situation that would warrant removal or redaction, many operators are eager to help people.  What is frustrating is when a website feels like it is helping a person, only to get word days later that the same content has popped up elsewhere online – meaning on a copycat website.  In some instances people wrongly accuse the original website of being connected to the copycat website, and the original website is left to defend itself and try to convince the person that the accusations are inaccurate.  There is a saying that “no good deed goes unpunished,” and I think it is true for website operators in that position.

As the new-age saying goes, “The Struggle is Real!”

I don’t know what the solution is to all of these problems.  If you have kept up with this Fighting Fair on the Internet blog series that I have been working on over the past year, you know that I REALLY disapprove of people abusing the internet.  I support the freedom of speech, but I also think that the freedom of speech shouldn’t give one a license to be an a-hole either.  I don’t know that there is a bright-line rule for what content should and should not be acceptable…but as Supreme Court Justice Potter Stewart said in Jacobellis v. Ohio back in 1964, describing his threshold for obscenity, “I know it when I see it.”  For me, after having seen so much through work and in my own personal life, I think that is true.  My hope is that if I keep talking about these issues and hosting educational seminars and workshops in an effort to raise awareness, perhaps people will join my mission.  I firmly believe that we can ALL do better with our online actions…all we need is a little education and guidance.

Until next time friends…

 

Fighting Fair on the Internet: Part 1 | The Internet Sucks!

Okay, so I know that the title “The Internet Sucks!” is rather harsh, but lately that is how I feel.  There was once a time when the internet was used as an actual tool and not a weapon.  I recognize that to a great degree it is still a tool, because we can share thoughts, ideas, and solid information, and we are all the wiser for it.  No longer do we have to go to the library to look things up or wait a year for something to be published.  Now everything is at our fingertips within seconds, and from an educational perspective this is an awesome thing!  The ability to share meaningful thoughts and ideas in a collaborative environment also makes the internet awesome, especially when it is used for good.  Of course, it has also helped us reconnect and stay connected with friends and family who live across the globe…and for me, I am thankful to have such opportunity.  Yes, there are countless reasons why the internet is still good – but that’s not what I am talking about – otherwise this would be a short posting about puppies, baby goats, and kittens.  What I am referring to is the other side of that coin…

As I scroll through all of the social media pages that are out there, reading the different postings regarding…well, just about anything someone happens to write about, I find myself ever thankful that I grew up in a time when the internet wasn’t so popular.  It seems that the information highway has become the “misinformation highway,” and so many have become quick to believe, and consequently “like” and “share,” just about anything that is posted…no matter how ridiculous it would seem to anyone who actually stopped and thought about what they were reading for a minute.  Mainstream media wants so badly to draw attention that they will highlight situations that really shouldn’t be highlighted, and then often skew them, because it does nothing more than “stir the pot” and generate ratings.  I have often said that those who “stir the pot” should have to lick the spoon.  Top that off with the keyboard warriors of today, who seem to thrive on being malicious turds, and you come to realize that the internet has really become a hostile environment and people are legitimately suffering from it in many different forms.  Someone can’t even post a picture of a puppy without someone saying “that is the ugliest puppy I have ever seen” and going on to get into it with someone else over that comment.  Who gives a crap if you think the puppy is ugly?  Why does your opinion on that matter?  Don’t get me wrong, I am all for the freedom of speech (and as a lawyer in my line of work I help advocate for it); however, just because something is legal doesn’t mean that you should push the boundaries just to say you could!  Freedom of speech shouldn’t be used as a license to be a dick!  At what point did people bypass the Golden Rule?  Further, and on point, not everything that you say (or write) is protected speech…but so many people forget that, or have apparently never been taught that lesson in school.  In my best Mr. Mackey voice from South Park: “Bullying is bad…mmmmkay.  Harassing someone is bad…mmmmkay.  Lying and making up stories is bad…mmmmkay.”  Sure, there are exceptions – satire and the like…and that all seems pretty self-explanatory to me…but perhaps what I consider common sense isn’t so common?

While the shift has been going on for some time, it has only been in the last five years that I have really noticed the change.  Perhaps that’s because I now deal with it on a daily basis, whether for work or because I have it thrown in my face every time I read any thread, on any post, on pretty much any topic.  True, I could just not read the comments…but the inquisitive social-scientist mind I have won’t allow me to simply dismiss them.  As I see it, there seems to be a drastic increase in people who literally take offense to everything.  At the same time, there is an equally drastic increase in people who think being a keyboard-warrior troll is somehow productive and funny; and somewhere in the gap between the two extremes are those who can find a bit of humor in some good old-fashioned ribbing but know when things have gone too far and won’t engage in those activities.  You know the types I am talking about.  They are the ones who literally post the “I’m just here for the comments” meme to a thread to show some level of participation without taking a side…  Why is that?  How has all of this come to be?  Why does everyone want websites that allow third-party content to be the “moral police”?  Even if sites were to start being the “moral police,” where does one draw the line in the sand?  Shouldn’t society, as a whole, have a duty to raise awareness and police its own conduct?  Is it a fruitless endeavor to try to get people to police their own conduct, or do people generally desire to behave in a positive manner but are just lacking some basic knowledge and tools for real dispute resolution in today’s technological world?  I mean, let’s face it…it’s not like many of us had parents who grew up in this particular environment to draw upon for examples of how to handle these kinds of situations; heck, the game Oregon Trail was considered cool technology when I was young, let alone the internet.

Through this series of blogs under my self-titled topic “Fighting Fair on the Internet,” I will discuss my personal viewpoints on these questions in a balanced approach, in hopes of raising awareness of these issues; offering discussion points and/or, at least, some food for thought; and providing some general legal commentary and tips for what I call “fighting fair on the internet” along the way.  Of course, while I have some level of education in the social sciences, I certainly do not claim to be an expert…but I am fascinated by human nature, and this seems to be a very relevant and current issue with which I have had some level of experience.  Stick around friends…I anticipate this is going to be an interesting ride!

Cheers!

Anette