E-Commerce: Pros and Cons of Drop-Shipping

Drop-shipping is a popular business model in e-commerce where a retailer does not hold inventory but instead transfers customer orders and shipment details to a supplier or wholesaler, who then ships the products directly to the customer. While drop-shipping offers several advantages, it also comes with certain challenges. Let’s explore the pros and cons of drop-shipping:

Pros of Drop-shipping:

  1. Low Startup Costs: Drop-shipping eliminates the need to invest in inventory, warehousing, and shipping infrastructure, making it a cost-effective option for entrepreneurs with limited capital.
  2. Wide Product Range: As drop-shippers rely on suppliers with extensive inventories, they can offer a wide range of products without managing physical stock.
  3. Flexibility and Scalability: Drop-shipping allows businesses to test various product lines and adjust their offerings quickly based on customer demand. It also offers scalability without the limitations of managing inventory.
  4. Location Independence: Drop-shipping businesses can be managed from anywhere, as long as there is an internet connection. This allows for greater flexibility in work arrangements.
  5. Reduced Risk: Since the drop-shipper does not pre-purchase products, they are not exposed to the risk of unsold inventory or slow-moving goods.

Cons of Drop-shipping:

  1. Lower Profit Margins: Drop-shipping often yields lower profit margins than traditional retail models because the supplier’s per-unit price and per-order shipping fees consume much of each sale (see the worked example after this list).
  2. Inventory and Product Quality Control: Drop-shippers rely on third-party suppliers for inventory, which can lead to challenges in monitoring product quality, availability, and shipping times.
  3. Order Fulfillment Challenges: As drop-shipping involves multiple parties, there is a risk of miscommunication and delays in order fulfillment, potentially leading to customer dissatisfaction.
  4. Branding and Customer Relationships: Drop-shippers have limited control over packaging, branding, and customer experience, which may impact their ability to build a strong brand and lasting customer relationships.
  5. Competitiveness: The low barriers to entry in drop-shipping can lead to increased competition, making it challenging to stand out in a crowded market.
  6. Supplier Reliability: The success of drop-shipping heavily relies on the reliability and efficiency of suppliers. If a supplier fails to deliver on time or experiences stock shortages, it can directly impact the drop-shipper’s business. Depending on your vendor’s terms of service, you may have little recourse for errors to boot.
  7. Profit Margin Compression: As more retailers enter the drop-shipping space, suppliers may increase product prices or charge additional fees, leading to reduced profit margins for the drop-shipper.
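
To make the margin math concrete, here is a minimal sketch in Python comparing a drop-shipped order against a traditional wholesale order for the same item. Every price and fee is a purely illustrative assumption, not a figure from any actual supplier agreement:

```python
# Illustrative margin comparison: drop-shipping vs. traditional retail.
# All numbers below are hypothetical, chosen only to show the arithmetic.

def margin(retail_price: float, total_cost: float) -> float:
    """Profit margin as a fraction of the retail price."""
    return (retail_price - total_cost) / retail_price

retail_price = 50.00

# Drop-shipping: the supplier's per-unit price bakes in their own profit,
# and the retailer typically also pays per-order shipping/handling fees.
dropship_cost = 35.00 + 6.00   # supplier price + per-order fees
print(f"Drop-ship margin: {margin(retail_price, dropship_cost):.0%}")   # 18%

# Traditional retail: buying wholesale in bulk lowers the per-unit cost,
# but requires up-front capital and carries the risk of unsold inventory.
wholesale_cost = 25.00 + 4.00  # bulk unit price + allocated fulfillment
print(f"Wholesale margin: {margin(retail_price, wholesale_cost):.0%}")  # 42%
```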

In conclusion, drop-shipping offers a flexible and cost-effective way to start an e-commerce business, but it also comes with inherent challenges. Entrepreneurs considering drop-shipping should carefully weigh the pros and cons to determine if this business model aligns with their goals and capabilities. Proper supplier selection, effective communication, and a focus on providing a positive customer experience can help mitigate some of the challenges associated with drop-shipping.

Disclaimer: This information is intended to be general advice and should not be relied upon as formal legal advice. If you are looking for legal advice as to your particular situation, please contact an attorney in your jurisdiction. If you’re located within the state of Arizona, consider contacting Beebe Law, PLLC.

Ninth Circuit says COPPA does not preempt state law claims – Jones v. Google

In this case, a class of children (“Children”), represented by their parents and guardians, filed a lawsuit against Google LLC, YouTube LLC, and several other companies, alleging violations of the Children’s Online Privacy Protection Act (COPPA). The Children claimed that Google used persistent identifiers to collect data and track their online behavior (for purposes of targeted advertising) without consent, in violation of state law and COPPA. More specifically, the Children are seeking “damages and injunctive relief, asserting only state law claims: invasion of privacy, unjust enrichment, consumer protection violations, and unfair business practices, arising under the constitutional, statutory, and common law of California, Colorado, Indiana, Massachusetts, New Jersey, and Tennessee. The parties agree that all of the claims allege conduct that would violate COPPA’s requirement that child directed online services give notice and obtain “verifiable parental consent” before collecting persistent identifiers.” Google argued that it wasn’t subject to COPPA because YouTube is a “platform for adults,” even though it knows that children use the platform. [Editor’s note: That sure does seem like a stretch of an argument given just how much content directed at children there is on that platform.]

The district court dismissed the case, citing preemption grounds (that is that the state law claims were preempted by COPPA, a federal regulation), but the Ninth Circuit Court of Appeals reversed the dismissal in an amended opinion.

The Court first considered whether COPPA preempted state law claims that were based on the same conduct prohibited by COPPA. The court noted that: “[e]xpress preemption is a question of statutory construction. COPPA’s preemption clause provides: ‘[n]o State or local government may impose any liability . . . that is inconsistent with the treatment of those activities or actions under this section.‘ 15 U.S.C. § 6502(d).” (emphasis in original) The court determined that state laws that supplement or require the same as federal law are not inconsistent and do not stand as an obstacle to Congress’s objectives. Thus, the court concluded that COPPA’s preemption clause does not bar state-law causes of action that are parallel to, or proscribe the same conduct forbidden by, COPPA.

The court also addressed conflict preemption, which occurs when state law conflicts with a federal statute. It found that conflict preemption did not apply in this case because the state law claims did not prevent or frustrate the accomplishment of COPPA’s federal objectives.

As a result, the Ninth Circuit reversed the district court’s dismissal of the case on preemption grounds and remanded it for further consideration of other arguments for dismissal.

Citation: Jones v. Google, LLC, Case No. 21-16281 (9th Cir. Jul. 13, 2023)

Disclaimer 1: This summary was initially generated by ChatGPT and then edited to include more specific information by a real human … because, you know, humans are still better than the machine tool.

Disclaimer 2: This is for general information purposes only. This should not be relied upon as formal legal advice. If you have a legal matter that you are concerned with, you should seek out an attorney in your jurisdiction who may be able to advise you of your rights and options.

The Temptation of ChatGPT for Legal Contracts: Why Human Expertise Reigns Supreme

Disclaimer: This article, while reviewed and slightly edited by a real live human prior to publication, was initially drafted by ChatGPT. Even ChatGPT knows its own limitations.

In this digital age, where technology continues to advance at a rapid pace, it’s no surprise that businesses and individuals seek innovative solutions for various tasks, including legal contract creation. With the rise of AI-powered language models like ChatGPT, one might be tempted to rely on them for generating legal contracts quickly and conveniently. However, while ChatGPT and similar tools offer impressive capabilities, there are significant reasons why they fall short when it comes to formal legal contract creation.

Understanding the Temptation

ChatGPT, with its ability to generate coherent and contextually relevant text, can be alluring for those seeking a quick solution for legal contract drafting. The convenience of inputting prompts and receiving instant responses may seem enticing, especially for individuals who are not well-versed in legal language or lack the resources for professional legal assistance. The prospect of saving time and money might make ChatGPT an appealing choice at first glance.

The Limitations of ChatGPT

  1. Lack of Contextual Understanding: While ChatGPT excels in understanding and generating text based on provided prompts, it lacks the ability to truly comprehend the nuances of legal contracts and their specific legal implications. It lacks a deep understanding of legal concepts, precedents, and regulations that are crucial for creating enforceable and comprehensive contracts.
  2. Legal Accuracy and Updates: Legal landscapes are dynamic, with laws, regulations, and court rulings subject to change. ChatGPT’s training data might not encompass the most up-to-date legal information, potentially leading to inaccuracies or outdated clauses in generated contracts. Attorneys stay abreast of legal developments and ensure that contracts align with current laws and regulations.
  3. Tailored and Specific Legal Advice: Legal contracts require a personalized touch to address the unique needs and circumstances of each client. ChatGPT, while proficient in generating text, cannot provide the tailored legal advice and expertise that an attorney can offer. Attorneys can carefully analyze a client’s situation, identify potential risks, and customize contracts accordingly.
  4. Complex Legal Language: Legal contracts often utilize specialized terminology and language that carry precise legal meanings. ChatGPT may not fully grasp the intricate nuances and subtleties of legal language, potentially resulting in ambiguous or poorly drafted provisions that could be exploited or lead to disputes.
  5. Confidentiality and Security: Legal contracts often involve sensitive and confidential information. Sharing such information with a third-party AI model might raise concerns regarding data privacy and security. Working with a trusted attorney ensures the confidentiality and protection of sensitive information.

The Indispensable Role of Human Expertise

While technology can undoubtedly enhance various aspects of our lives, legal contract creation necessitates the expertise, experience, and ethical judgment that only human attorneys can provide. Attorneys possess the legal knowledge, contextual understanding, and analytical skills required to create contracts that mitigate risks, protect client interests, and ensure compliance with applicable laws.

By engaging an attorney for legal contract creation, businesses and individuals can benefit from:

  1. Tailored Advice: Attorneys can assess unique circumstances, identify potential risks, and provide advice tailored to specific needs, ensuring contracts align with individual goals and requirements.
  2. Legal Compliance: Attorneys stay updated on legal changes and ensure that contracts adhere to current laws and regulations, reducing the risk of non-compliance and legal disputes.
  3. Clarity and Precision: Attorneys are skilled in crafting precise and unambiguous contract language, minimizing the potential for misinterpretation and reducing the likelihood of future disagreements.
  4. Risk Mitigation: Attorneys understand the potential risks associated with different types of contracts and can draft provisions that protect clients from liabilities and unforeseen circumstances.
  5. Dispute Resolution: In the unfortunate event of a contract dispute, attorneys provide legal representation and guidance, leveraging their expertise to achieve favorable outcomes through negotiation, mediation, or litigation.

While ChatGPT and similar AI language models have their merits, they cannot replace the indispensable role of human attorneys in the creation of formal legal contracts. The complexities, legal nuances, and individual circumstances involved in contract drafting necessitate the knowledge, experience, and personalized advice that only human legal professionals can provide. By seeking the guidance of an attorney, individuals and businesses can ensure the creation of enforceable, comprehensive, and customized contracts that protect their interests and mitigate legal risks.

[EDITOR NOTE: Look, we get it. Everyone wants a faster way to prepare content. Lawyers are no exception. That said, it’s important to understand the difference between when ChatGPT can be a useful tool, and when it’s best to have a set of trained legal eyes looking at and thinking about something. Legal documents are not the area where you want to cut corners. If you do, there is a good chance that you will be paying a lawyer to deal with a ChatGPT mess up … because, you know, ChatGPT makes up fake law and stuff. Don’t bet the company on ChatGPT … at least not yet.]

DISCLAIMER: This is for general information purposes only. This should not be relied upon as formal legal advice. If you have a legal matter that you are concerned with, you should seek out an attorney in your jurisdiction who may be able to advise you of your rights and options.

SCOTUS declines to rule on Section 230, again – Gonzalez v. Google

The widely watched industry nail-biter of a case, Gonzalez v. Google, has been ruled upon by the Supreme Court of the United States. Many advocates of Section 230 thought for sure that SCOTUS would ruin the application of Section 230 as we know it; however, that didn’t happen. Much to the dismay of many critics of Section 230, SCOTUS (rightfully so under the facts of this case, in my opinion) kicked the can down the road and declined to address the Section 230 question.

CASE SUMMARY:

In this case, the parents and brothers of Nohemi Gonzalez, a U.S. citizen killed in the 2015 coordinated terrorist attacks in Paris, sued Google, LLC under 18 U.S.C. §§2333(a) and (d)(2). They alleged that Google was directly and secondarily liable for the attack that killed Gonzalez. The secondary-liability claims were based on the assertion that Google aided and abetted and conspired with ISIS through the use of YouTube, which Google owns and operates.

The District Court dismissed the complaint for failure to state a claim but allowed the plaintiffs to amend their complaint. However, the plaintiffs chose to appeal without amending the complaint. The Ninth Circuit affirmed the dismissal of most claims, citing Section 230 of the Communications Decency Act, but allowed the claims related to Google’s approval of ISIS videos for advertisements and revenue sharing through YouTube to proceed.

The Supreme Court granted certiorari to review the Ninth Circuit’s application of Section 230. However, since the plaintiffs did not challenge the rulings on their revenue-sharing claims, and in light of the Supreme Court’s decision in Twitter, Inc. v. Taamneh, the Court found that the complaint failed to state a viable claim for relief. The Court acknowledged that the complaint appeared to fail under the standards set by Twitter and the Ninth Circuit’s unchallenged holdings. Therefore, the Court vacated the judgment and remanded the case to the Ninth Circuit for reconsideration in light of the Supreme Court’s decision in Twitter. [Author Note: If you listen to the oral argument, you’d see just how weak of a case was brought by Plaintiff].

In summary, the Supreme Court did not address the viability of the plaintiffs’ claims but indicated that the complaint seemed to fail to state a plausible claim for relief, and therefore, declined to address the application of Section 230 in this case. The case was remanded to the Ninth Circuit for further consideration.

DISCLAIMER & OTHER POINTS:

I’m currently sitting at the Tenth Annual Conference on Governance of Emerging Technology and Science. There is a lot of talk about AI, including ChatGPT. Because the Gonzalez opinion was so incredibly short by comparison, I thought I would test out ChatGPT’s ability to summarize this case. Having followed this case, and read the SCOTUS opinion myself, I was quite surprised by the summary that it spit out, which is what you just read above. For those who want to read the opinion for themselves (it’s only three pages), you can review the SCOTUS opinion linked below. I’ve also included the link to the Twitter case (which is a more typical 38-page opinion). In case you are curious, I asked ChatGPT to summarize the Twitter case as well; however, there is apparently some sort of character limit, as I received an error message about the request being too long. We’re all learning.
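
For what it’s worth, the usual workaround for a “request too long” error is to split the document into chunks and summarize each chunk separately. Here is a minimal sketch in Python; the character budget and the file name are made-up assumptions for illustration, not any tool’s actual limit:

```python
# Naive text chunking for length-limited tools: split a long court opinion
# into pieces under a character budget, breaking on paragraph boundaries.
# MAX_CHARS is a made-up illustration, not any actual tool's real limit.

MAX_CHARS = 8_000

def chunk_text(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        # Start a new chunk if adding this paragraph would bust the budget.
        if current and len(current) + len(paragraph) + 2 > max_chars:
            chunks.append(current)
            current = ""
        current += paragraph + "\n\n"
    if current:
        chunks.append(current)
    return chunks

# Each chunk can then be summarized on its own, and the per-chunk summaries
# combined (or summarized once more) into a single final summary.
opinion_text = open("twitter_v_taamneh.txt").read()  # hypothetical file name
for i, chunk in enumerate(chunk_text(opinion_text), start=1):
    print(f"Chunk {i}: {len(chunk):,} characters")
```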

Citation: Gonzalez v. Google, 598 U.S. ___ (May 18, 2023)

Citation: Twitter v. Taamneh, 598 U.S. ___ (May 18, 2023)

DISCLAIMER: This is for general information purposes only. This should not be relied upon as formal legal advice. If you have a legal matter that you are concerned with, you should seek out an attorney in your jurisdiction who may be able to advise you of your rights and options.

Section 230 Protects Users Too – Monge v. Univ. of Pa.

One of the rarely discussed points, in the grand scheme of Section 230 chatter anyway, is the fact that Section 230 not only protects various interactive internet platforms, but it also protects users, just like you and me, from liability for third-party content on those platforms. For example, if you’re an administrator/moderator of some random Facebook group, generally speaking, Section 230 protects you from legally actionable content that other users post in that group. Just like the interactive Internet platforms, you, as a user of the platform, also get some protections. This is also true, as this case will underscore, if you share an article via email that is alleged to be defamatory. Given the ease and frequency with which people share information that they don’t necessarily read, let alone fact-check, you’d think this point would be front and center in more discussions when trying to teach people that Section 230 isn’t all about protecting “big tech”.

To be clear, the fact that Section 230 protects users too isn’t just something determined by the courts through case law, but is something actually spelled out right in the language of the statute itself.

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

47 U.S.C. § 230(c)(1) (emphasis of bold and italics added)

The below information is based upon the information provided in the court opinion. I have no independent knowledge about the facts of this case.

Plaintiff: Janet Monge

Defendant: University of Pennsylvania, Deborah Thomas, et al.

HIGH LEVEL OVERVIEW

A University of Pennsylvania faculty member, Dr. Deborah Thomas, shared an article that allegedly defamed Dr. Monge via a listserv of an organization that Dr. Monge is a member of. Obviously upset about the situation, Dr. Monge filed a lawsuit asserting claims of defamation, defamation by implication, false light, and civil aiding and abetting against defendants, including Dr. Thomas. Dr. Thomas filed a Fed. R. Civ. P. 12(b)(6) motion to dismiss for failure to state a claim upon which relief can be granted, arguing that 47 U.S.C. § 230(c)(1) immunizes her from liability. The court agreed with Dr. Thomas and dismissed the action with prejudice, i.e., dismissed those claims permanently.

THE LEGAL WEEDS

It is almost funny to say “legal weeds” here because this was an easy Section 230 win. The Court stated that “[c]ourts analyzing and applying the CDA have consistently held that distributing, sharing, and forwarding content created and/or developed by a third party is conduct immunized by the CDA” and then cited six cases supporting this position relating to content shared in an internet chat room, via email, and through other technologies. The Court similarly rejected Dr. Monge’s “material contribution” argument, which suggested that Dr. Thomas materially contributed to the alleged defamatory statements by including her own commentary in the email forwarding the articles. The Court’s rationale was that Dr. Thomas did not add anything new to the articles, or materially modify them, when she shared them via email, so she did not materially contribute to the alleged defamation. The Court again cited multiple cases supporting this point. Based upon these points, the Court rightfully concluded that “Dr. Thomas’s conduct of sharing the allegedly defamatory articles via email is immune from liability under the CDA.”

SUMMARY OF THOUGHTS

As mentioned before, this was an easy Section 230 win. Ironically, this is also one of those instances where a plaintiff, upset about the content of something, ends up making the matter a bigger deal by filing a lawsuit; now legal academics are talking about the situation, a prime example of the Streisand effect. I can understand in general why plaintiffs want to set the record straight when they believe false information has been put out about them. On the other hand, if you’re going to take it to court, it is important to realize that such action often shines a lot of light on an issue that you might rather have kept to a smaller audience.

Citation: Monge v. Univ. of Pa., Case No. 22-2942 (E.D. Pa. March 10, 2023)

DISCLAIMER: This is for general information purposes only. This should not be relied upon as formal legal advice. If you have a legal matter that you are concerned with, you should seek out an attorney in your jurisdiction who may be able to advise you of your rights and options.

Newsweek’s 12(b)(6) failed in defamation case – Boone v. Newsweek

I have never understood the point of news publications including images in their articles that aren’t actually related to the story or incident. This is especially true when they are reporting critically about specific individuals. I get it: images help with clicks and drawing attention, but that’s what stock photos or heavily cropped photos are for. Nevertheless, according to the statements in the Court’s opinion, Newsweek decided to do just that, which resulted, unsurprisingly, in a defamation and false light case against it.

The below information is based upon the information provided in the court opinion. I have no independent knowledge about the facts of this case.

Plaintiff: James Boone

Defendant: Newsweek, LLC, et al. (related entities)

HIGH LEVEL OVERVIEW

Newsweek, an online news organization, published a story about a police officer who was accused of racially profiling a man in a restaurant. Rather than posting an image of the officer who was accused of the profiling, Newsweek chose to embed a photo of a different police officer, James Boone, who was apparently identifiable by partial face, nametag, and badge number, and who had nothing to do with the headline or story. Allegedly, as a consequence, Boone and his family received texts, emails, and messages via social media inquiring about the article under the impression that he was involved, resulting in Boone having to seek police protection. Boone’s lawyer wrote to Newsweek alerting it to the issue and asking that “appropriate measures [be taken] to mitigate the harm.” For whatever reason, Newsweek apparently didn’t respond. Consequently, Boone filed a lawsuit against Newsweek for defamation and false light in the United States District Court for the Eastern District of Pennsylvania.

Newsweek then filed a motion to dismiss under Fed. R. Civ. P. 12(b)(6) [failure to state a claim], arguing that Boone failed to plead enough facts to support a reasonable inference that Newsweek acted with “actual malice.”

A LITTLE INTO THE LEGAL WEEDS

In this particular instance, Boone is considered a public figure. “To prevail on a defamation case, the First Amendment requires that public-figure plaintiffs plead, and later prove, that the defendant acted with ‘actual malice.’” Contrary to most laypersons’ belief, and as the court explains, “‘[a]ctual malice’ is a term of art that does not connotate ill will or improper motivation”; rather, it means that the publisher acted “with knowledge that [the allegedly defamatory statement] was false or with reckless disregard of whether it was false or not.” Breaking it down further, “reckless disregard” means “that the defendant in fact entertained serious doubts as to the truth of the statement or that the defendant had a subjective awareness of probable falsity.”

The “actual malice” standard is a pretty high bar to recovery. It can be even higher when you’re considering, as here, not a direct false statement but alleged defamation by implication. In this instance, “[Boone] has to show that [Newsweek] either intended to communicate the defamatory meaning or knew of the defamatory meaning and was reckless in regard to it.” This inquiry is subjective in nature, requiring “some evidence showing, directly or circumstantially, that the defendants themselves understood the potential defamatory meaning of their statement.” Obviously, false implications are capable of being defamatory.

Here, Boone would need to prove that “Newsweek knew the implication that Boone was involved in [the subject incident] or was reckless about that falsity” and that “Newsweek either intended to convey the false impression that Boone was involved in [the subject incident], or knew that publishing the photograph would likely convey the false impression that Boone was involved in [the subject incident] but recklessly published it anyway.” While the Court discussed some other things, it focused on the fact that Boone’s badge number and name tag were visible in the photograph. The Court stated:

“The fact that the photograph depicted Boone’s nametag and badge number therefore gives rise to a reasonable inference that Newsweek (1) knew that Boone was not involved in the [subject] incident or acted in reckless disregard of that fact, and (2) knew that publishing the photograph would likely convey the false impression that Boone was involved in the [subject] incident but recklessly published it anyway.”

Because there was a “reasonable inference that Newsweek acted with actual malice” the Court denied the motion to dismiss the defamation claim. With respect to the false light claim, which also requires the finding of actual malice, the Court similarly denied the motion to dismiss.

FINAL THOUGHTS

Defamation litigation can be part of the “cost of doing business” when you are in the news publication business. That said, this was, in my opinion, easily avoidable. I’m not sure if there was a failure to have full legal review done before the article was published, or if someone didn’t take the demand letter from Boone’s attorney seriously … but Newsweek, with the limited information we’re presented with anyway, appears to have had two opportunities to avoid this litigation and didn’t take advantage of either. The first would have been to train reporting and editing staff not to use unrelated images, especially of identifiable people, in news reporting. This should be a no-brainer, but given how often I see it happen in news publications, apparently it isn’t. The second would have been to acknowledge the mistake and simply swap out the picture for something more appropriate … you know, like an image of the actual officer accused in the article … when they received notice that there was an issue. Doubling down on something like this seems like an unnecessary risk … one that has now resulted in costly litigation. Maybe Newsweek has a huge litigation budget … but even then, you’d think they’d want to use it a little more wisely.

Citation: Boone v. Newsweek, LLC, Case No. 22-1601 (E.D. Pa. Feb. 27, 2023)

DISCLAIMER: This is for general information purposes only. This should not be relied upon as formal legal advice. If you have a legal matter that you are concerned with, you should seek out an attorney in your jurisdiction who may be able to advise you of your rights and options.

NY District Court Swings a Bat at “The Hateful Conduct Law” – Volokh v. James

This February 14th (2023), Valentine’s Day, the NY Federal District Court showed no love for New York’s Hateful Conduct Law when it granted a preliminary injunction to halt it. This is, to me, an exceptionally fun case because it includes not only the First Amendment (to the United States Constitution) but also Section 230 of the Communications Decency Act, 47 U.S.C. § 230. I’m also intrigued because the renowned Eugene Volokh, Locals Technology, Inc., and Rumble Canada, Inc. are the Plaintiffs. If Professor Volokh is involved, it’s likely to be an interesting argument. The information about the case below has been pulled from the Court’s opinion and various linked websites.

Plaintiffs: Eugene Volokh, Locals Technology, Inc., and Rumble Canada, Inc.

Defendant: Letitia James, in her official capacity as New York Attorney General

Case No.: 22-cv-10195 (ALC)

The Honorable Andrew L. Carter, Jr. started the opinion with the following powerful quote:

 “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’”

Matal v. Tam, 137 S.Ct. 1744, 1764 (2017) 

Before we get into what happened, it’s worth taking a moment to explain who the Plaintiffs in the case are. Eugene Volokh (“Volokh”) is a renowned First Amendment law professor at UCLA. In addition, Volokh is the co-owner and operator of the popular legal blog known as the Volokh Conspiracy. Rumble operates a website, similar to YouTube, which allows third-party independent creators to upload and share video content. Rumble sets itself apart from other similar platforms because it has a “free speech purpose” and its “mission [is] ‘to protect a free and open internet’ and to ‘create technologies that are immune to cancel culture.’” Locals Technology, Inc. (“Locals”) is a subsidiary of Rumble and also operates a website that allows third-party content to be shared among paid, and unpaid, subscribers. Similar to Rumble, Locals also reports having a “pro-free speech purpose” and a “mission of being ‘committed to fostering a community that is safe, respectful, and dedicated to the free exchange of ideas.’” Suffice it to say, the Plaintiffs are no strangers to the First Amendment or Section 230. So how did these parties become Plaintiffs? New York passed a well-intentioned, but arguably unconstitutional, law that could very well negatively impact them.

On May 14, 2022, some random racist nut job used Twitch (a social media site) to livestream himself carrying out a mass shooting on shoppers at a grocery store in Buffalo, New York. This disgusting act of violence left ten people dead and three people wounded. As with most atrocities, and with what I call the “train wreck effect”, the video went viral on various other social media platforms. In response to the atrocity, New York’s Governor Kathy Hochul kicked the matter over to the Attorney General’s Office for investigation, with an apparent instruction to focus on “the specific online platforms that were used to broadcast and amplify the acts and intentions of the mass shooting” and a direction to “investigate various online platforms for ‘civil or criminal liability for their role in promoting, facilitating, or providing a platform to plan or promote violence.’” Apparently the Governor hasn’t heard about Section 230, but I’ll get to that in a minute. After investigation, the Attorney General’s Office released a report, and later a press release, stating that “[o]nline platforms should be held accountable for allowing hateful and dangerous content to spread on their platforms” because an alleged “lack of oversight, transparency, and accountability of these platforms allows hateful and extremist views to proliferate online.” This is where anyone with any knowledge about this area of law should insert the facepalm emoji. If you aren’t familiar with this area of law, what follows will help explain (a little; we’re trying to keep this from being a dissertation).

Now no reasonable person will disagree that this event was tragic and disgusting. Humans are weird beings and for whatever reason (though I suspect a deep dive into psychology would provide some insight), we cannot look away from a train wreck. We’re drawn to it like a moth to a flame. Just look at any news organization and what is shared. You can’t tell me that’s not filled with “train wreck” information. Don Henley said it best in his lyrics in the 1982 song Dirty Laundry, talking about the news: “she can tell you about the plane crash with a gleam in her eye” … “it’s interesting when people die, give us dirty laundry”. A Google search for the song lyrics will give you full context if you’re not a Don Henley fan … but even 40 plus years later, this is still a truth.

In an effort to combat the perceived harms from the atrocity that went viral, New York enacted the Hateful Conduct Law, entitled “Social media networks; hateful conduct prohibited,” which took effect on December 3, 2022. What in the world does that mean? Well, the law applies to “social media networks” and defines “hateful conduct” as: “[T]he use of a social media network to vilify, humiliate, incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” N.Y. Gen. Bus. Law § 394-ccc(1)(a). Okay, but still ...

As the Court’s opinion (with citations omitted) explains:

[T]he Hateful Conduct Law requires that social media networks create a complaint mechanism for three types of “conduct”: (1) conduct that vilifies; (2) conduct that humiliates; and (3) conduct that incites violence. This “conduct” falls within the law’s definition if it is aimed at an individual or group based on their “race”, “color”, “religion”, “ethnicity”, “national origin”, “disability”, “sex”, “sexual orientation”, “gender identity” or “gender expression”.

The Hateful Conduct Law has two main requirements: (1) a mechanism for social media users to file complaints about instances of “hateful conduct” and (2) disclosure of the social media network’s policy for how it will respond to any such complaints. First, the law requires a social media network to “provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct.” This mechanism must “be clearly accessible to users of such network and easily accessed from both a social media networks’ application and website. . . .” and must “allow the social media network to provide a direct response to any individual reporting hateful conduct informing them of how the matter is being handled.” N.Y. Gen. Bus. Law § 394-ccc(2).

Second, a social media network must “have a clear and concise policy readily available and accessible on their website and application. . . ” N.Y. Gen. Bus. Law § 394-ccc(3). This policy must “include how such social media network will respond and address the reports of incidents of hateful conduct on their platform.” N.Y. Gen. Bus. Law § 394-ccc(3).

The law also empowers the Attorney General to investigate violations of the law and provides for civil penalties for social media networks which “knowingly fail to comply” with the requirements. N.Y. Gen. Bus. Law § 394-ccc(5).
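
To make the statute’s two requirements concrete, here is a rough sketch in Python of what a minimal complaint mechanism with a “direct response” might look like. This is purely my own hypothetical model for illustration; the statute prescribes no particular implementation, and every name below is invented:

```python
# Hypothetical sketch of the two things N.Y. Gen. Bus. Law § 394-ccc asks a
# "social media network" to provide: (1) an accessible report mechanism that
# can give the reporting user a direct response, and (2) a published policy
# for how reports are handled. Nothing here comes from the statute's text
# beyond those two requirements.

from dataclasses import dataclass, field
from itertools import count

RESPONSE_POLICY = (  # requirement (2): a clear, accessible policy
    "Reports of hateful conduct are reviewed by our moderation team, and "
    "the reporter is informed of how the matter is being handled."
)

_ticket_ids = count(1)

@dataclass
class Report:
    reporter: str
    content_url: str
    description: str
    ticket_id: int = field(default_factory=lambda: next(_ticket_ids))

def notify_user(user: str, message: str) -> None:
    print(f"[to {user}] {message}")  # stand-in for an email/in-app notice

def submit_report(reporter: str, content_url: str, description: str) -> Report:
    """Requirement (1): intake that responds directly to the reporter."""
    report = Report(reporter, content_url, description)
    notify_user(reporter, f"Report #{report.ticket_id} received. "
                          "We will inform you how the matter is handled.")
    return report
```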

Naturally this raised a lot of questions. How far reaching is this law? Who and what counts as a “social media network”? What persons or entities would be impacted? Who decides what is “hateful conduct”? Does the government have the authority to try and regulate speech in this way?

Two days before the law was to go into effect, on December 1, 2022, the instant action was commenced by the Plaintiffs, bringing both facial and as-applied challenges to the Hateful Conduct Law. Plaintiffs argued that the law “violates the First Amendment because it: (1) is a content viewpoint-based regulation of speech; (2) is overbroad; and (3) is void for vagueness.” Plaintiffs also alleged that the law is preempted by Section 230 of the Communications Decency Act.

For the full discussion and analysis on the First Amendment arguments, it’s best to review the full opinion, however, the Court’s opinion opened with the following summary of its position (about the First Amendment as applied to the law):

“With the well-intentioned goal of providing the public with clear policies and mechanisms to facilitate reporting hate speech on social media, the New York State legislature enacted N.Y. Gen. Bus. Law § 394-ccc (“the Hateful Conduct Law” or “the law”). Yet, the First Amendment protects from state regulation speech that may be deemed “hateful” and generally disfavors regulation of speech based on its content unless it is narrowly tailored to serve a compelling governmental interest. The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal.”

Plaintiffs’ preemption argument was that Section 230 of the Communications Decency Act preempts the law because the law imposes liability on websites by treating them as publishers. As the Court outlines (some citations to cases omitted):

The Communications Decency Act provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). The Act has an express preemption provision which states that “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. § 230(e)(3).

As compared to the section of the Opinion regarding the First Amendment, the Court gives very little analysis on the Section 230 preemption claim beyond making the following statements:

“A plain reading of the Hateful Conduct Law shows that Plaintiffs’ argument is without merit. The law imposes liability on social media networks for failing to provide a mechanism for users to complain of “hateful conduct” and for failure to disclose their policy on how they will respond to complaints. N.Y. Gen. Bus. Law § 394-ccc(5). The law does not impose liability on social media networks for failing to respond to an incident of “hateful conduct”, nor does it impose liability on the network for its users own “hateful conduct”. The law does not even require that social media networks remove instances of “hateful conduct” from their websites. Therefore, the Hateful Conduct Law does not impose liability on Plaintiffs as publishers in contravention of the Communications Decency Act.” (emphasis added)

Hold up, sparkles. So the Court recognizes that platforms cannot be held liable (in these instances anyway) for third-party content, no matter how ugly that content might be, yet wants to force (punish, in my opinion) platforms to spend big money on development to create all these content reporting mechanisms, and set transparency policies, for content that they actually have no legal requirement to remove? How does this law make sense in the first place? What is the point (besides trying to trap platforms into having a policy that, if they don’t follow it, could give rise to an action for unfair or deceptive advertising)? This doesn’t encourage moderation. In fact, I’d argue that it does the opposite and encourages a website to say “we don’t do anything about speech that someone claims to be harmful because we don’t want liability for failing to do so if we miss something.” In my mind, this is a punishment based upon third-party content. You don’t need a “reporting mechanism” for content that people aren’t likely to find offensive (like cute cat videos). To this end, I can see why Plaintiffs raised a Section 230 preemption argument … because if you drill down, the law is still trying to force websites to take action to deal with undesirable third-party content (and then punish them if they don’t follow whatever their policy is). In my view, it’s an attempt to do an end run around Section 230. The root issue is still undesirable third-party content. Consequently, I’m not sure I agree with the Court’s position here. I don’t think the Court drilled down far enough to the root of the issue.

Either way, the Court did, as explained at the beginning, grant Plaintiffs’ Motion for Preliminary Injunction (based upon the First Amendment arguments), which, for now, prohibits New York from trying to enforce the law.

Citation: Volokh v. James, Case No. 22-cv-10195 (ALC) (S.D.N.Y., Feb. 14, 2023)

DISCLAIMER: This is not meant to be legal advice nor should it be relied upon as such.

Pro Se’s kitchen sink approach results in a loss – Lloyd v. Facebook

The “kitchen sink approach” isn’t an uncommon strategy when it comes to filing lawsuits against platforms. Notwithstanding decades of precedent clearly indicating that such efforts are doomed to fail, plaintiffs still give it the ol’ college try. While this makes more sense with pro se plaintiffs, who don’t have the same legal training and understanding of how to research case law, pro se plaintiffs aren’t the only ones who try it … no matter how many times they lose. Indeed, even some lawyers like to get paid to make losing arguments. [Insert the hands-up shrug emoji here].

Plaintiff: Susan Lloyd

Defendants: Facebook, Inc.; Meta Platforms, Inc.; Mark Zuckerberg (collectively, “Defendants”)

In this instance, Plaintiff is a resident of Pennsylvania who suffers from “severe vision issues”. As such, she qualifies as “disabled” under the Americans with Disabilities Act (“ADA”). Ms. Lloyd, like approximately 266 million other Americans, uses the Facebook social media platform, which, as my readers likely know, is connected to, among other things, third-party advertisements.

While the full case history isn’t recited in the Court’s short opinion, it’s worthwhile to point out (it appears, anyway, from the limited record before me at this time) that Plaintiff was afforded the opportunity to amend her complaint multiple times, as the Court cites to the Third Amended Complaint (“TAC”). According to the Court’s order, the TAC alleged the following:

Plaintiff alleged problems with the platform, suggesting it is inaccessible to disabled individuals who have no arms or who have vision problems (and itemized a laundry list of issues that I won’t cite here … but suffice it to say that there was a complaint about the font size not being adjustable). [SIDE NOTE: For those that are unaware, website accessibility is a thing, and plaintiffs can, and will, try to hold website operators (of all types, not just big ones like Facebook) accountable if they deem there to be an accessibility issue. If you want to learn a little more, you can read information that is put out on the Beebe Law website regarding ADA Website Compliance.]

Plaintiff alleged that the advertisements on Facebook were tracking her without her permission … except that users agree to Facebook’s Terms of Service (which presumably allow for that, since the court brought it up). I’m not sure at what point people will realize that if you are using something for free, you ARE the product. Indeed, there are many new privacy laws being put into place throughout various states (e.g., California, Colorado, Utah, Virginia, and Connecticut), but chances are, especially with large multi-national platforms, they are on top of the rules and are ensuring their compliance. If you aren’t checking your privacy settings, or blocking tracking pixels, etc., at some point that’s going to be on you. Technology gives folks ways to opt out, if you can locate them. I realize that sometimes these things can be hard to find, but often a search on Google will land you results … or just ask any late-teen or early-20s person. They seem to have a solid command of stuff like this these days.

Plaintiff also alleged that Defendants allowed “over 500 people to harass and bully Plaintiff on Facebook.” The allegations of threats by the other users are rather disturbing and won’t be repeated here (though you can review the case for the quotes). However, Plaintiff stated that each time she reported the harassment, she, and others, were told that it didn’t violate community standards. There is more to the story, where things allegedly escalated offline. The situation complained about, if true, is quite unsettling … and anyone with decency would be sympathetic to Plaintiff’s concerns.

[SIDE NOTE: That is not to suggest that what happened, if true, isn’t something that should be looked at and addressed for the future. I’m well aware that Facebook (along with other social media) has imperfect systems. Things that shouldn’t be blocked are blocked. For example, I’ve seen images of positive quotes and peanut butter cookies be blocked or covered from initial viewing as “sensitive”. On the other hand, I’ve also seen things that (subjectively speaking, but as someone who spent nearly a decade handling content moderation escalations) should be blocked that aren’t, like clearly spammy or scammer accounts. We all know them when we see them, yet they remain even after reporting. I’ve been frustrated by the system myself … and know well both sides of that argument. Nevertheless, if one were to take into account the sheer volume of posts that come in, you’d realize that it’s a modern miracle that they have any system for trying to deal with such issues at all. Content moderation at scale is incredibly difficult.]

Notwithstanding the arguments offered, the court was quick to procedurally dismiss all but the breach of contract claim because those claims had already been dismissed before (Plaintiff apparently re-pleaded the same causes of action). More specifically, the court dismissed the ADA and Rehabilitation Act claims because (at least in the 9th Circuit) Facebook is not a place of public accommodation under federal law. [SIDE NOTE: there is a pretty deep split in the circuits on this point, so this isn’t necessarily a “get out of jail free” card if one is a website operator, especially if one may be availing oneself of the jurisdiction of another circuit that wouldn’t be so favorable. Again, if you’re curious about ADA Website Compliance, check out the Beebe Law website.] Similarly, Plaintiff’s Unruh Act claim failed because that act doesn’t apply to digital-only websites such as Facebook. Plaintiff’s fraud and intentional misrepresentation claims failed because there wasn’t really any proof that Facebook intended to defraud Plaintiff; only the Terms of Service were discussed. So naturally, if you can’t back up the claims, it ends up being a wasted argument. Maybe not so clear for pro se litigants, but this should be pretty clear to lawyers (which still doesn’t keep them from trying). Plaintiff’s claims for invasion of privacy, negligence, and negligent infliction of emotional distress failed because they are barred by Section 230 of the Communications Decency Act, 47 U.S.C. § 230. Again, this is another one of those situations where decades of precedent contrary to a plaintiff’s position isn’t a deterrent from trying to advance such claims anyway. Lastly, the claims against Zuckerberg were dismissed because Plaintiff didn’t allege that he was personally involved in or directed the challenged acts (i.e., he isn’t an “alter ego”).

This left the breach of contract claim. Defendants argued that Plaintiff’s breach of contract claim should be dismissed because the Court lacked diversity jurisdiction over it: Plaintiff cannot meet the amount in controversy. As the Court explains, “28 U.S.C. § 1332 grants federal courts original jurisdiction over civil actions where the amount in controversy exceeds $75,000 and the parties are citizens of different states.” Indeed, the parties are from different states; however, the requirement that the amount in controversy exceed $75,000 is where Plaintiff met an impossible hurdle. As discussed above, users of Facebook all agree to Facebook’s Terms of Service. Here, Plaintiff’s claim for breach of contract is based on the conduct of third-party users, and Facebook’s Terms of Service disclaim all liability for third-party conduct. Further, the TOS also provide that “aggregate liability arising out of . . . the [TOS] will not exceed the greater of $100 or the amount Plaintiff has paid Meta in the past twelve months.” Facebook, having been around the block a time or two with litigation, has definitely refined its TOS over the years to make it nearly impenetrable. I mean, never say never, BUT … good luck. Lastly, the TOS preclude damages for “lost profits, revenues, information, or data, or consequential, special, indirect, exemplary, punitive, or incidental damages.” Based upon all of these issues, there is no legal way that Plaintiff could meet the required amount in controversy of $75,000. The Court dismissed the final remaining claim, breach of contract, without leave to amend, although the Court did add that “[t]he Court expresses no opinion on whether Plaintiff may pursue her contract claim in state court.” One might construe that as a sympathetic signal to the Plaintiff (or other future Plaintiffs) …
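
A trivial back-of-the-envelope check makes the jurisdictional point. The $100-or-amount-paid cap comes from the TOS language quoted above; the assumption that Plaintiff paid Meta nothing is mine (Facebook accounts are free):

```python
# Why the amount in controversy can't plausibly reach $75,000 here: the TOS
# cap aggregate liability at the greater of $100 or whatever the user paid
# Meta in the past twelve months, and Facebook is free to use.

JURISDICTIONAL_MINIMUM = 75_000      # 28 U.S.C. § 1332 threshold, in dollars
paid_to_meta_last_12_months = 0      # assumption: an ordinary free account

liability_cap = max(100, paid_to_meta_last_12_months)
print(f"TOS liability cap: ${liability_cap}")                       # $100
print(f"Exceeds threshold? {liability_cap > JURISDICTIONAL_MINIMUM}")  # False
```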

There are a few takeaways from this case, in my opinion:

  1. Throwing garden variety kitchen sink claims at platforms, especially ones the size of Facebook, is likely to be a waste of ink on paper on top of the time it takes to even put the ink on the paper in the first place. If you have concerns about issues with a platform, engage the services of an Internet lawyer in your area who understands all of these things.
  2. Properly drafted, and accepted, Terms of Service for your website can be a huge shield from liability. This is why copying and pasting from some random site, or using a “one-size-fits-all” free form from one of those “do-it-yourself” sites, is penny wise and pound foolish. Just hire a darn Internet lawyer to help you if you’re operating a business website. It can save you money and headache in the long run … an investment in the future of your company, if you will.
  3. Website Accessibility, and related claims, is a thing! You don’t hear a lot about it because the matters don’t typically make it to court. Many of these cases settle based upon demand letters for thousands of dollars and costly remediation work … so don’t think that it can’t happen to you (if you’re operating a website for your business).

Citation: Lloyd v. Facebook, Inc., Case No. 21-cv-10075-EMC (N.D. Cal. Feb. 7, 2023)

DISCLAIMER: This is for general information only. This is not legal advice nor should it be relied upon as such. If you have concerns regarding your own specific situation, be sure to reach out to an attorney in your jurisdiction who may be able to advise you of your rights.

GoDaddy not liable for third party snagging a previously owned domain – Rigsby v. GoDaddy Inc.

This case presents a cautionary tale of why you want to turn auto-renewal on, and verify that each renewal actually goes through, for your website domains if you plan on using them long term for any purpose. Failing to renew on time (or to confirm that the renewal actually happened) can have unintended, frustrating consequences.
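
On that note, domain expiration is easy to monitor yourself rather than relying solely on a registrar’s billing emails. Here is a minimal sketch; it assumes the third-party python-whois package (pip install python-whois), and a real monitoring tool would need more defensive parsing and alerting than this:

```python
# Hypothetical domain-expiry watchdog: look up the WHOIS expiration date
# and warn when renewal is near. Assumes the third-party "python-whois"
# package; WHOIS data formats vary by registry, so treat this as an
# illustration rather than a vetted monitoring tool.

from datetime import datetime
import whois  # pip install python-whois

def days_until_expiry(domain: str) -> int:
    record = whois.whois(domain)
    expires = record.expiration_date
    if isinstance(expires, list):  # some registries return multiple dates
        expires = min(expires)
    return (expires - datetime.now()).days

if __name__ == "__main__":
    domain = "scottrigsbyfoundation.org"
    remaining = days_until_expiry(domain)
    if remaining < 60:
        print(f"WARNING: {domain} expires in {remaining} days; renew now!")
    else:
        print(f"{domain} is good for another {remaining} days.")
```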

Plaintiffs-Appellants: Scott Rigsby and Scott Rigsby Foundation, Inc. (together “Rigsby”).

Defendants-Appellees: GoDaddy, Inc., GoDaddy.com, LLC, and GoDaddy Operating Company, LLC and Desert Newco, LLC (together “GoDaddy”).

Scott Rigsby is a physically challenged athlete and motivational speaker who started the Scott Rigsby Foundation. In 2007, in connection with the foundation, he registered the domain name “scottrigsbyfoundation.org” with GoDaddy.com. Unfortunately, and allegedly as a result of a glitch in GoDaddy’s billing system, Rigsby failed to pay the annual renewal fee in 2018. In these instances, the domain typically becomes free for anyone to purchase, and that is exactly what happened: a third party registered the then-available domain name and turned it into a gambling information site. Naturally, this is a very frustrating situation for Rigsby.

Rigsby then decided to sue GoDaddy for violations of the Lanham Act, 15 U.S.C. § 1125(a) (which for my non-legal industry readers is the primary federal trademark statute in the United States) and various state laws and sought declaratory and injunctive relief including return of the domain name.

This legal strategy is most curious to me because Rigsby didn’t name the third party that actually purchased the domain and made use of it. For those that are unaware, “use in commerce” by the would-be trademark infringer is a requirement of the Lanham Act, and it seems like a pretty long leap to suggest that GoDaddy was the party in this situation that made use of the subject domain.

Rigsby also faced another hurdle, that is, GoDaddy has immunity under the Anticybersquatting Consumer Protection Act, 15 U.S.C. § 1125(d) (“ACPA”). The ACPA limits the secondary liability of domain name registrars and registries for the act of registering a domain name. Rigsby would be hard pressed to show that GoDaddy registered, used, or trafficked in his domain name with a bad faith intent to profit. Similarly, Rigsby would also be hard pressed to show that GoDaddy’s alleged wrongful conduct surpassed mere registration activity.

Lastly, Rigsby faced a hurdle when it comes to Section 230 of the Communications Decency Act, 47 U.S.C. § 230. I’ve written about Section 230 many times in my blogs, but in general, Section 230 provides immunity to websites/platforms from claims stemming from content created by third parties. To be sure, there are some exceptions, including intellectual property law claims. See 47 U.S.C. § 230(e)(2). Here, however, there wasn’t an act done by GoDaddy that would sit squarely within the Lanham Act such that GoDaddy would have liability, so that exception doesn’t apply. Additionally, 47 U.S.C. § 230(e)(3) preempts inconsistent state law claims. Put another way, with a few exceptions, a platform will also avoid liability from various state law claims. As such, Section 230 would shield GoDaddy from liability for Rigsby’s state-law claims for invasion of privacy, publicity, trade libel, libel, and violations of Arizona’s Consumer Fraud Act. These are garden variety tort law claims that plaintiffs will typically assert in these kinds of instances; however, plaintiffs have to be careful that the claims are directed at the right party … and it’s fairly rare that a platform is going to be the right party in these situations.

The District of Arizona dismissed all of the claims against GoDaddy, and Rigsby appealed the dismissal to the Ninth Circuit Court of Appeals. While sympathetic to Rigsby’s plight, the Ninth Circuit correctly concluded, on February 3, 2023, that Rigsby was barking up the wrong tree in terms of who was named as a defendant, and it affirmed the dismissal of the claims against GoDaddy.

To read the court’s full opinion which goes into greater detail about the facts of this case, click on the citation below.

Citation: Rigsby v. GoDaddy, Inc., Case No. 21-16182 (9th Cir. Feb. 3, 2023)

DISCLAIMER: This is for general information only. None of this is meant to be legal advice nor should it be relied upon as such.

Section 230 doesn’t protect against a UGC platform’s own unlawful conduct – Fed. Trade Comm’n v. Roomster Corp

This seems like a no-brainer to anyone who understands Section 230 of the Communications Decency Act, but for some reason that still hasn’t stopped defendants from making the tried-and-failed argument that Section 230 protects a platform from its own unlawful conduct.

Plaintiffs: Federal Trade Commission, State of California, State of Colorado, State of Florida, State of Illinois, Commonwealth of Massachusetts, and State of New York

Defendants: Roomster Corporation, John Shriber, individually and as an officer of Roomster, and Roman Zaks, individually and as an officer of Roomster.

Roomster (roomster.com) is an internet-based (desktop and mobile app) room and roommate finder platform that purports to be an intermediary (i.e., the middle man) between individuals who are seeking rentals, sublets, and roommates. For anyone that has been around for a minute in this industry, you might be feeling like we’ve got a little bit of a Roommates.com legal situation going on here, but it’s different. Roomster, like many platforms that allow third-party content, also known as User Generated Content (“UGC”) platforms, does not verify listings or ensure that the listings are real or authentic, and it has allegedly allowed postings to go up where the address of the listing was a U.S. Post Office. Now this might seem out of the ordinary to an everyday person reading this, but I can assure you, it’s nearly impossible for any UGC platform to police every listing, especially for a small company with any reasonable volume of traffic, and moderation only gets harder as the platform grows. That’s just the truth of operating a UGC platform.

Notwithstanding these fake posting issues, Plaintiffs allege that Defendants have falsely represented that properties listed on the Roomster platform are real, available, and verified. [OUCH!] They further allege that Defendants have created or purchased thousands of fake positive reviews to support these representations and placed fake rental listings on the Internet to drive traffic to their platform. [DOUBLE OUCH!] If true, Roomster may be in for a ride.

The FTC has alleged that Defendants’ acts or practices violate Section 5(a) of the FTC Act, 15 U.S.C. § 45(a) (which in layman’s terms is the federal law against unfair methods of competition and deceptive practices), and the states have alleged violations of their various state versions of deceptive acts and practices laws. At this point, based on the alleged facts, it seems about right to me.

Roomster filed a Motion to Dismiss pursuant to Rule 12(b)(6) for Plaintiffs’ alleged failure to state a claim, for various reasons that I won’t discuss here (you can read about them in the case), but it also argued that “even if Plaintiffs may bring their claims, Defendants cannot be held liable for injuries stemming from user-generated listings and reviews because … they are interactive computer service providers and so are immune from liability for inaccuracies in user-supplied content, pursuant to Section 230 of the Communications Decency Act, 47 U.S.C. § 230.” Where is the facepalm emoji when you need it? Frankly, that’s a “hail-mary” and a total waste of an argument … because Section 230 does not immunize a defendant from liability for its own unlawful conduct. Indeed, a platform can be held liable for offensive content on its service or system if it contributes to the development of what makes the content unlawful. This is also true where a platform has engaged in deceptive practices, or has had direct participation in a deceptive scheme. Fortunately, like many courts before it, the court in this case saw through the crap and rightfully denied the Motion to Dismiss on this point (and others).

I smell a settlement in the air, but only time will tell.

Case Citation: Fed. Trade Comm’n v. Roomster Corp., Case No. 22 Civ. 7389 (S.D.N.Y. Feb. 1, 2023)

DISCLAIMER: This is for general information only. None of this is meant to be legal advice nor should it be relied upon as such.

Anti-SLAPP Laws Without a Mandatory Award of Fees and Costs Are a Hindrance to Access to Justice and Chill Free Speech

Arizona recently passed a new anti-SLAPP law, 2022 Ariz. HB 2722 (it’s not in effect yet and won’t be for a few months at least). A colleague of mine and I are working on a more comprehensive discussion of anti-SLAPP generally and this new law specifically (which I will link here once it’s done, and/or you can always follow me here or on various social media to get the latest), but as I was writing the initial draft of that article this week I became more and more frustrated. Anti-SLAPP laws without a mandatory award of attorneys’ fees and costs to the party that prevails on such a motion are a hindrance to access to justice for real victims of SLAPP suits, and they chill free speech. How? Let me elaborate.

I should preface this with the fact that I spent the better part of a decade working as in-house counsel for an interactive online forum, and I’ve pretty much seen it all when it comes to true victims sharing their honest stories (and being threatened because of it) and bad actors using the Internet as a source of revenge (where people are desperate to make the harassment stop and to remove untruthful, hurtful information from the platform). As such, my opinion is through a lens of having heard countless stories from all sides.

Generally speaking (obviously there are always outliers), those who lawfully criticize wrongdoers, especially online, do so because they don’t have the means to file suit over the experience that led to the criticism. Complaining online is their remedy. If those being criticized are powerful and/or wealthy, it’s really easy to say “Take that content down or I’ll sue you.” Many Americans are living paycheck to paycheck, but even those comfortably above that often cannot afford to be sued. Just look at how long it took to get through the Depp/Heard case. Granted, that was a case where two parties were heavily pushing back on who was right … but that is not unlike many civil cases. In fact, the behaviors exhibited in that courtroom, on display for the watching world, are not all that unusual for litigating parties. The only difference is that it was televised and people care enough about celebrity dirt to watch the case unfold on live television/online streaming.

But if you aren’t a celebrity or wealthy individual … if you cannot afford to fight back through expensive lawyers, even if you’re in the right … what do you do? Chances are you begrudgingly remove the content to save your own pocketbook or, worse, lose a legal action and end up with a judgment against you, albeit by default, because you could not, for whatever reason (and there are many reasons), appear in the case. Ahh, yes … the threat of a SLAPP suit is indeed a huge and powerful sword.

But what happens if you cannot remove the content because the website’s terms of service prohibit it, or the posting has been scraped and put up elsewhere such that you do not have control over it? Oh yes, this happens all the time online. People don’t read Terms of Service and, unfortunately, copycat websites scrape content that isn’t theirs. In this instance, chances are, you will get sued anyway. Why? Because it’s worth it for the wealthy/powerful to try to get a court order to remove the content from the internet, and they can’t do that without a suit. After all, many platforms will honor court orders for content removal even if they are obtained by default.

And in a lot of ways, this makes sense. Especially when bad actors/defamers hide behind anonymous accounts and/or are in foreign countries, which makes pursuing the perpetrator cost-prohibitive or near impossible for real victims. Real victims need relief, and this is one such pathway to a remedy. On the other hand, for the truth tellers, it can be hard to stand up to wealthy/powerful bad actors when faced with a lawsuit. Those who speak up honestly can get the short end of the stick. If a suit is filed and they can’t afford to defend against it, are they to be victimized yet again by default? I know it happens. I’ve seen it happen. Let me give you an example.

Imagine with me for a moment that you are the owner of a new start-up company called Cool Business, LLC operating in Arizona, and you want to engage the services of an advertising company. Your friend, Tim, gives you the name of Great Advertising Co. based out of New York. A New York advertising company sounds fancy, and you think they will probably do a far better job than anyone here in little Arizona, so you reach out to them. The conversation goes great. They send you a basic contract to sign for the work to be done for Cool Business, LLC and require a $6,000.00 deposit so they can get started, with another $4,000.00 due in 90 days, for a total contract of $10,000.00 over three months. You skim the agreement, gloss over the headings of the boilerplate terms (because they’re all the same, right?), sign it, and send them the $6,000.00.

Everything goes great at first, but months into the relationship, and dozens of calls later, you realize that Great Advertising Co. is flakey. They aren’t delivering the services on time and there is always an excuse for why the work isn’t done, but when the 90 days hit, they still ask for their additional $4,000.00 pursuant to the contract. The business relationship at this point has soured. Great Advertising Co. demands its additional $4,000.00 under the contract, which you refuse to pay, and you instead demand a refund of your $6,000.00. Great Advertising Co. refuses to refund you the $6,000.00.

Pissed off, you take your story to your favorite business attorney in Arizona. She reviews your contract and advises you that while you may have a breach of contract claim, the terms of your contract say that you agree to litigate any matters stemming from the agreement in a court in New York, and that because the contract is with Cool Business, LLC, you’d have to hire a lawyer in the state of New York to handle the matter for you, because businesses must be represented by a lawyer in the court where you’d have to file. Knowing that New York lawyers can be very expensive, you decide it’s not worth the hassle and cut your losses. Understandably upset, however, you take to the Internet to tell everyone you know how, truthfully, Great Advertising Co. ripped you off, and you explain in detail what happened. You post your reviews to Google, Yelp, Facebook, and any other place you can find to help spread the word about these unscrupulous business tactics, and you leave it at that.

Ten months later you receive a letter from Great Advertising Co.’s New York lawyer telling you that you technically still owe the $4,000.00 under the contract, that Great Advertising Co. doesn’t appreciate the negative reviews, and demanding that you immediately remove them or Great Advertising Co. will file a lawsuit against you for defamation. You ignore the letter because you believe you have a good breach of contract case and the First Amendment on your side: what you said was 100% the truth, and you know, after talking to your favorite defamation attorney a few years back, that truth is a defense to a claim of defamation. A day prior to the one-year anniversary of your pissed-off-customer online posting tirade, you are served with a complaint for defamation, filed in New York. You’ve watched the Johnny Depp and Amber Heard defamation trial. You saw how long that case was dragged out, and you know that you don’t have the funds to pay an attorney to fight for your rights in New York. You didn’t even have the funds to hire a New York attorney to bring a breach of contract case against Great Advertising Co. to try to get your $6,000.00 back. Feeling defeated, and without talking to your favorite defamation attorney again, you just ignore the complaint. You figure, what’s the worst that can happen? Great Advertising Co. obtains a default judgment against you individually, with an order to take down the content, and the judge awards $2,500.00 in damages.

Now, this entire hypothetical, while obviously the facts have been changed and such, is based on a true story of what one individual experienced and how these types of situations can go south in a hurry. There are countless similar stories out there. Good folks are victimized not just once, by the initial acts, but twice in some instances, like in this hypothetical. This is where good anti-SLAPP laws come into play.

Anti-SLAPP laws are designed to fight back against those who file lawsuits just to try and silence their critics, but without the promise of attorneys’ fees and costs for the work, victims of little means are hard-pressed to find lawyers willing to help (hence the hindrance to access to justice). The sad truth is that most lawyers (like most professionals) cannot afford to work for free – being a professional is expensive and it’s not getting any cheaper. When anti-SLAPP laws include such fee provisions, it’s a lot easier for attorneys to consider taking on a SLAPP case with low or no money down, because they know they will get paid when they win. This of course presumes it’s a deep pocket that filed the SLAPP in the first place, because the reality is a judgment is only worth one’s ability to collect.

When anti-SLAPP laws fail to include such provisions, there is little deterrent to filing a SLAPP suit. Yes, if the little person being picked on has means, maybe the filer will think twice, but that’s not often the case. SLAPP filers know, and bank on, the litigation causing financial hardship or stress so that the truth teller will simply give in to the demands to remove the content before even answering the complaint, thus chilling truthful speech. It’s a powerful tactic. If it wasn’t, there wouldn’t be so many states with anti-SLAPP laws trying to curb the problem in the first place.

As many legal practitioners are painfully aware, it can be very difficult to get a judge to award attorneys’ fees and costs absent a statutory requirement. So even if you fight a SLAPP suit and win, you could still be out tens of thousands of dollars (or more, depending on the case) with no guarantee of recovery. As an attorney, when you have to tell potential clients this, you can see the defeat in people’s faces before you even get going. It’s scary. What average person has tens of thousands of dollars lying around to pay a lawyer to fight for their First Amendment right to free speech?

Would those odds make you excited about standing up for yourself? I think not. If you knew all this, would you be so willing to share with the public honest information about bad actors and your personal experience? I think not.

And this doesn’t just go for complaining consumers, but also for investigative journalists. If you think a bigger company going after an unhappy customer who got ripped off and complained about it is bad … imagine what a powerful elite will try to do to an investigative journalist trying to uncover some very serious dirty laundry and expose it to the world.

Bottom line: for any anti-SLAPP law to be a true shield, it must, among other things, contain at minimum a statutory award of attorneys’ fees and costs.

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

#firstamendment #defamation #antiSLAPP #legislation #accesstojustice

California Assembly Bill 1687, designed to protect against age discrimination, gets tagged by Ninth Circuit on First Amendment grounds: IMDb.com, Inc. v. Becerra

On June 19, 2020, the Ninth Circuit Court of Appeals ruled that the content-based restrictions on speech contained within California’s Assembly Bill 1687 were facially unconstitutional because the law “does not survive First Amendment scrutiny.”

If you live outside of glamorous places like California, New York, and Florida, you may not be paying attention to laws being pushed by organizations like the Screen Actors Guild, aka “SAG.” Nevertheless, I try to keep my ear to the ground for cases that involve the First Amendment and Section 230 of the Communications Decency Act. This case happens to raise both issues, although only the First Amendment matter is addressed here.

For those who may be unfamiliar, IMDb.com is the Internet Movie Database, which provides a free public website with information about movies, television shows, and video games. It also contains information on actors and crew members in the industry, which may include the subject’s age or date of birth. This is an incredibly popular site, the court opinion noting that as of January 2017 “it ranked 54th most visited website in the world.” The information on the site is generated by users (just like you and me), but IMDb does employ a “Database Content Team tasked with reviewing the community’s additions and revisions for accuracy.”

Outside of the “free” user-generated section, IMDb also introduced, back in 2002, a subscription-based service called “IMDbPro” for industry professionals (actors/crew and recruiters) – essentially a LinkedIn for Hollywood – providing space for professionals to upload resume-type information, headshots, etc., which casting agents could search for talent.

Back in 2016, SAG apparently pushed for regulation in California (enacted as Assembly Bill 1687) that arguably targeted IMDb, in an effort to curtail alleged age discrimination in the entertainment industry. No doubt a legitimate concern (as it is in many industries); however, good intentions often result in bad law.

AB 1687 was signed into law, codified at Cal. Civ. Code § 1798.83.5 and included the following provision:

A commercial online entertainment employment service provider that enters into a contractual agreement to provide employment services to an individual for a subscription payment shall not, upon request by the subscriber, do either of the following: (1) [p]ublish or make public the subscriber’s date of birth or age information in an online profile of the subscriber [or] (2) [s]hare the subscriber’s date of birth or age information with any Internet Web sites for the purpose of publication.

Cal. Civ. Code § 1798.83.5(b)(1)-(2)

The statute also provides, in pertinent part:

A commercial online entertainment employment service provider subject to subdivision (b) shall, within five days, remove from public view in an online profile of the subscriber the subscriber’s date of birth and age information on any companion Internet Web sites under its control upon specific request by the subscriber naming the Internet Web sites.

Cal. Civ. Code § 1798.83.5(c)

The practical effect of these provisions is that subscribers of IMDbPro may request that IMDb remove the subscriber’s age or date of birth from the subscriber’s profile (which I would think IMDb could do to the extent it has control over such profile data) AND, more problematically, from anywhere else on its website where such information exists, regardless of who created that content. That extends to content the IMDbPro subscribers may not have control over, as it may have been generated by third-party users of the site.

The Court opinion explained that “[b]efore AB 1687 took effect, IMDb filed a complaint under 42 U.S.C § 1983 in the Northern District of California to prevent its enforcement. IMDb alleged that AB 1687 violated both the First Amendment and Commerce Clause of the Constitution, as well as the Communications Decency Act, 47 U.S.C. § 230(f)(2).” While there was much back and forth between the parties, the crux of the dispute, and crucial for the appeal, was the language prohibiting IMDb from publishing age information without regard to the source of the information.

When considering the statutory language restricting what could be posted, the Court of Appeals concluded:

  • AB 1687 implemented content-based restriction on speech (i.e., dissemination of date of birth or age) that is subject to First Amendment scrutiny.
  • AB 1687 did not present a situation where reduced protection would apply (e.g., where the speech at issue is balanced against a social interest in order and morality).
    • IMDb’s content did not constitute Commercial Speech.
    • IMDb’s content did not facilitate illegal conduct.
    • IMDb’s content did not implicate privacy concerns.
  • AB 1687 does not survive strict scrutiny because it was not the least restrictive means to accomplish the goal and it wasn’t narrowly tailored.

In conclusion, the Court articulated a position that I wholly agree with: “Unlawful age discrimination has no place in the entertainment industry, or any other industry. But not all statutory means of ending such discrimination are constitutional.”

Citation: IMDb.com, Inc. v. Becerra, Case Nos. 18-15463, 18-15469 (9th Cir. 2020)

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Facebook’s Terms of Service set jurisdiction for litigation – We Are the People, Inc. v. Facebook, Inc.

A common mistake, and arguably a waste of time, is attempting to bring breach of contract litigation in a jurisdiction other than the one the contract specifies. Years ago I wrote an article about the importance of boilerplate terms. One of the very first points I discussed was choice of law/choice of forum clauses.

Most people who are entering into a contract read the contract before they sign their name. Curiously, this doesn’t seem to translate when people are signing up for a website or app. I actually wrote about this too, warning people that they are responsible for their own actions when it comes to website Terms of Service and that they should read them before they sign up. Alas, we’re all human, and the only time people tend to look at the Terms of Service (i.e., the use contract) is when the poo has hit the fan. Even then, the first thing most people look at (or should look at, if they are considering litigation) is the choice of law provision.

In this instance, Plaintiffs brought a lawsuit against Facebook in the Southern District of New York alleging that Facebook’s removal of content from Facebook pages violated Facebook’s “contractual and quasi-contractual obligations to keep Plaintiffs’ content posted indefinitely.” Anyone who has ever used Facebook would likely realize that the “contract” being discussed stems from Facebook’s Terms of Service. Facebook filed a motion to dismiss based upon Section 230 of the Communications Decency Act or, alternatively, to transfer venue.

Why would Facebook want to transfer venue? Because arguably California has better law for Facebook. California has a strong anti-SLAPP law, codified at Cal. Code Civ. Proc. § 425.16 (which applies to many cases in which Facebook is likely to be named), and many Section 230 cases there have been decided favorably for platforms. As such, Facebook’s Terms of Service contain a forum selection clause that requires any disputes over the contract be heard by a court in California; more specifically, exclusively in the Northern District of California (or a state court located in San Mateo County).

As I see it, these Plaintiffs either didn’t bother to read that part of the Terms of Service or they wanted to roll the dice and see if Facebook wouldn’t notice (Pro-tip: fat chance of that working). Regardless of the rationale, on June 3, 2020, the court quickly sided with Facebook, ruling that the Terms of Service forum selection clause was “plainly mandatory” absent some showing that the clause was unenforceable, a showing Plaintiffs failed to make and, according to the Court, could not make in this particular circumstance (given Defendants’ memorandum of law). Facebook’s Motion to Transfer was granted.

Citation: We Are the People, Inc. v. Facebook, Inc., Case No. 19-CV-8871 (JMF) (S.D.N.Y. 2020)

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

It’s hard to find caselaw to support your claims when you have none – Wilson v. Twitter

When the court’s opinion is barely over a page when printed, it’s a good sign that the underlying case had little to no merit.

This was a pro se lawsuit, filed against Twitter because Twitter suspended at least three of Plaintiff’s accounts, which were used to “insult gay, lesbian, bisexual, and transgender people,” for violating the company’s terms of service, specifically its rule against hateful conduct.

Plaintiff sued Twitter alleging that “[Twitter] suspended his accounts based on his heterosexual and Christian expressions” in violation of the First Amendment, 42 U.S.C. § 1981, Title II of the Civil Rights Act of 1964, and for alleged “legal abuse.”

The court was quick to deny all of the claims explaining that:

  1. Plaintiff had no First Amendment claim against Twitter because Twitter is not a state actor; the court had to painfully explain that being a publicly traded company does not transform Twitter into a state actor.
  2. Plaintiff had no claim under § 1981 because he didn’t allege racial discrimination.
  3. Plaintiff’s Civil Rights claim failed because: (1) under Title II, only injunctive relief is available (not damages, which is what Plaintiff wanted); (2) Section 230 of the Communications Decency Act bars the claim; and (3) Title II does not prohibit discrimination on the basis of sex or sexual orientation (and no facts were asserted to support such a claim).
  4. Plaintiff failed to allege any conduct by Twitter that could plausibly amount to legal abuse.

The court noted that Plaintiff “expresses his difficulty in finding case law to support his claims.” Well, I guess it would be hard to find caselaw to support claims when you have no valid ones.

Citation: Wilson v. Twitter, Civil Action No. 3:20-0054 (S.D. W.Va. 2020)

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Section 230, the First Amendment, and You.

Maybe you’ve heard about “Section 230” on the news, or through social media channels, or perhaps by reading a little about it in an article written by a major publication … but unfortunately, that doesn’t mean the information you received is necessarily accurate. I cannot count how many times over the last year I’ve seen what seem to be purposeful misstatements of the law … which then get repeated over and over again – perhaps to fit some sort of political agenda. After all, each side of the aisle, so to speak, is attacking the law, but curiously for different reasons. While I absolutely despise lumping people into categories, political or otherwise, the best way I can describe the ongoing debate is that the liberals believe there is not enough censoring going on, and the conservatives think there is too much censorship going on. Meanwhile, you have the platforms hanging out in the middle, often struggling to do more with less…

In this article I will try to explain why I believe it is important that even lay people understand Section 230 and dispel some of the most common myths that continually spread throughout the Internet as gospel … even from our own Congressional representatives.

WHY LAY PEOPLE SHOULD CARE ABOUT SECTION 230

Not everyone who reads this will remember what it was like before the Internet. If you don’t, ask your elders what it was like to be “talked at” by your local television news station or newspaper. There was no real open dialog absent face-to-face or telephone communications. You were limited in who you could share information with. Even if you wrote a “letter to the Editor” at a local newspaper, it didn’t mean your “opinion” was necessarily going to be published. If you wanted to share a picture, you had to actually use a camera and film, take it to a developer, wait two weeks, pay for the developing, and pray that your pictures didn’t suck. I can’t tell you how many blurry photographs I have in a shoebox somewhere. Then you had to mail, hand out, or show your friends in person. And don’t even get me started about a phone that was stuck to the wall, where your “privacy” was limited to having a long phone cord that might stretch into the bathroom so you could shut the door. If you’re old enough to remember that, and are nodding your head in agreement … I encourage you to spend some time remembering what that was like. It seems that we non-digital natives are at a point in life where we take the technology we have for granted, and the digital natives (meaning they were born with all of this technology) don’t really know the struggles of life without it.

If you like being able to share information freely, and to comment on information freely, you absolutely should care about what many refer to as “Section 230.” So many of my friends, family, and colleagues say “I don’t understand Section 230 and I don’t care to … that’s your space,” yet these are the people I see posting content online about their businesses via LinkedIn or other social media platforms, sharing reviews of businesses they have been to, looking up information on Wikimedia, sharing their general opinions, and/or otherwise engaging in dialog and debate over topics that are important to them. In a large way, whether you know it or not, Section 230 has powered your ability to interact online in this way and has drastically shaped the Internet as we know it today.

IN GENERAL: SECTION 230 EXPLAINED

The Communications Decency Act (47 U.S.C. § 230) (often referred to as “Section 230” or the “CDA” or even “CDA 230”), in brief, is a federal law enacted in 1996 that, with a few exceptions carved out within the statute, protects the owners of websites/search engines/applications (each often synonymously referred to as “platforms”) from liability for third-party content. Generally speaking, if the platform didn’t actually create the content, it typically isn’t liable for it. Indeed, there are a few exceptions, but for now, we’ll keep this simple. Platforms that allow interactive third-party content are often referred to as user generated content (“UGC”) sites. Facebook, Twitter, Snapchat, Reddit, Tripadvisor, and Yelp are all examples of such platforms, and reasonable minds would likely agree that there is social utility behind each of these sites. That said, these household-name platform “giants” aren’t the only platforms on the internet that have social utility and benefit from the CDA. Indeed, it covers all of the smaller platforms too, including bloggers or journalists who want to allow people to comment on articles/content on their websites. Suffice it to say, there are WAY more little guys than there are big guys, or “Big Tech” as some refer to it.

If you’re looking for some sort of a deep dive on the history of the law, I encourage you to pick up a copy of Jeff Kosseff’s book titled The Twenty-Six Words That Created The Internet. It’s a great read!

ONGOING “TECHLASH” WITH SECTION 230 IN THE CROSS-HAIRS

One would be entirely naïve to suggest that the Internet is perfect. If you ask me, it’s far from perfect. I readily concede that there are indeed harms that happen online. To be fair, harms happen offline too, and they always have. Sometimes humans just suck. I’ve discussed a lot of this in my ongoing blog article series Fighting Fair on the Internet. What has been interesting to me is that many seem to want to blame people’s bad behavior on technology, and to try to hold technology companies liable for what bad people do using their technology.

I look at technology as a tool. By analogy, a hammer is a tool, yet we don’t hold the hammer manufacturer or the store that sold the hammer liable when a bad guy beats someone to death with it. I imagine the counter-argument is that technology is in the best position to help stop the harms. Perhaps that may be true to a degree (and I believe many platforms do try to assist by moderating content and otherwise setting certain rules for their sites), but the question becomes: should they actually be liable? If you’re a Section 230 “purist” the answer is “No.” Why? Because Section 230 immunizes platforms from liability for the content that other people say or do on their platforms. Platforms are still liable for the content they choose to create and post or otherwise materially contribute to (but even that is getting into the weeds a little bit).

The government, however, seems to have its own set of ideas. We already saw an amendment to Section 230 with FOSTA (the anti-sex trafficking amendment). Unfortunately, good intentions often make for bad law, and, in my opinion, FOSTA was one of those laws, having arguably caused more harm than good. I could explain why, but I’ll save that discussion for another time.

Then, in February of 2020, the DOJ held a “workshop” on Section 230. I was fortunate enough to be in the audience in Washington, D.C., where it was held, and recently wrote an article breaking down that “workshop.” If you’re interested in all the juicy details, feel free to read that article, but in summary it was basically four hours’ worth of: humans are bad and do bad things; technology is a tool with which bad humans do bad things; technology/platforms need to find a way to solve the bad human problem or face liability for what bad humans occasionally do with the tools they create; and we want to make changes to the law even though we have no empirical evidence to support the position that this is an epidemic rather than a minority … because bad people.

Shortly thereafter, the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (the “EARN IT Act”) was introduced, a bill designed to prevent the online sexual exploitation of children. While this sounds noble (FOSTA did too), when you unpack it all and look at the bigger picture, it’s another government attempt to mess with free speech and online privacy/security in the form of yet another amendment to Section 230, under the guise of being “for the children.” I have lots of thoughts on this, but I will save them for another article another day too.

This brings us to the most recent attack on Section 230. The last two (2) weeks have been a “fun” time for those of us who care about Section 230 and its application. Remember how I mentioned above that some conservatives are of the opinion that there is too much censorship online? This often refers to the notion that social media platforms (Facebook, Twitter, and even Google) censor or otherwise block conservative speech. Setting aside whether this actually happens or not (I’ve heard arguments pointing in both directions on this issue), President Trump shined a big light on this notion.

Let me first start off by saying that there is a ton of misinformation shared online. It doesn’t help that many people in society will quickly share things without actually reading them or researching whether the content has any validity, but will spend 15 minutes taking a data-mining quiz only to find out what kind of potato they are. As a side note, I made that up in jest and then later found out that there really is a quiz to find out what kind of potato you are. Who knew the 2006 movie Idiocracy was going to be so prophetic? Although I can’t really say this is just something that happens online. Anyone who ever survived junior high and high school knows that gossip is often riddled with misinformation, and somehow we seem to forget about the silliness that happens offline too. The Internet, however, has just given the gossipers a megaphone … to the world.

Along with other perceived harmful content, platforms have been struggling with how to handle such misinformation. Some have considered adding more speech by way of notifications, or “labels” as Twitter calls them, to advise their users that the information may be wholly made up or modified, shared in a deceptive manner, or likely to impact public safety or otherwise cause serious harm. As best I can tell, at least as far as Twitter goes, this is a relatively new effort. Other platforms like Facebook have apparently resorted to taking people’s accounts down, putting odd cover-ups over photos, etc. on content they deem “unworthy” for their users. Side note: While ideal in a perfect world, I’m not personally a fan of social media platforms fact-checking because: 1) it’s very hard to be an arbiter of truth; 2) it’s incredibly hard to do at scale; 3) once you start, people will expect you to do it on every bit of content that goes out – and that’s virtually impossible; and 4) if you fail to fact-check something that turns out to be false or otherwise misleading, people might assume that such content is accurate because they have come to rely on the fact-checking. And who checks the fact-checkers? Not that my personal opinion matters, but I think this is where the bigger tech companies have created more problems for themselves (and arguably for all the little sites that rely on Section 230 to operate without fear of liability).

So what kicked off the latest “Section 230 tirade”? Twitter “fact checked” President Trump in two different tweets on May 26, 2020 by adding a “label” to the bottom of the Tweets (which you have to click on to actually see – they don’t transfer when you embed them as I’ve done here) that said “Get the facts about mail-in ballots.” This clearly suggests that Twitter was in disagreement with information that the President Tweeted and likely wanted its users to be aware of alternative views.

https://twitter.com/realDonaldTrump/status/1265255845358645254?s=20

To me, that doesn’t seem that bad. I can absolutely see some validity to President Trump’s concern. I can also see an alternative argument, especially since I typically mail in my voting ballot. Either way, adding content in this way, versus taking it down altogether, seems like the route that provides people more information to consider for themselves, not less. In any event, if you think about it, pretty much everything that comes out of a politician’s mouth is subjective. Nevertheless, President Trump got upset over the situation, suggested that Twitter was “completely stifling FREE SPEECH,” and then made veiled threats about not allowing that to happen.

https://twitter.com/realDonaldTrump/status/1265427539008380928?s=20

If we know anything about this President, it is that when he’s annoyed with something, he will take some sort of action. President Trump ultimately ended up signing an Executive Order on “Preventing Online Censorship” a mere two (2) days later. For those who are interested, Santa Clara Law Professor Eric Goldman provided a great legal analysis of the Executive Order – certainly left-leaning, and unfavorable to our commander in chief – calling it “political theater.” Even if you align yourself with the “conservative” base, I would encourage you to set aside the Professor’s personal opinions (we all have opinions) and focus on the meat of the legal argument. It’s good.

Of course, and as expected, the Internet loses its mind, and all the legal scholars and practitioners come out of the woodwork commenting on Section 230 and the newly signed Executive Order, myself included. The day after the Executive Order was signed (and likely after President Trump read all the criticisms) he Tweeted out “REVOKE 230!”

https://twitter.com/realDonaldTrump/status/1266387743996870656?s=20

So this is where I have to sigh heavily. Indeed, there is irony in the fact that the President is calling for the revocation of the very same law that allowed innovation, and Twitter, to even become a “thing,” and which also makes it possible for him to reach out and connect with millions of people, in real time, in a pretty much unfiltered way as we’ve seen, for free, because he has the application loaded on his smartphone. In my opinion, but for Section 230, it is entirely possible that Twitter, Facebook, and all the other forms of social media and interactive user sites would not exist today; at least not as we know them. Additionally, I find it ironic that President Trump is making free speech arguments when he’s commenting about, and on, a private platform. For those of you who slept through high school civics, the First Amendment doesn’t apply to private companies … more on that later.

As I said though, this attack on Section 230 isn’t just stemming from the conservative side. Even Joe Biden has suggested that Section 230 should be “repealed immediately,” but he’s on the “social media companies censor too little” train, which is the complete opposite of the reason people like President Trump want it revoked.

HOW VERY AMERICAN OF US

How many times have you heard that Americans are self-centered jerks? Well, Americans do love their Constitutional rights, especially when it comes to falling in love with their own opinions and the freedom to share those opinions. Moreover, when it comes to the whole content moderation and First Amendment debate, we often look at the tech giants as purely American companies. True, these companies did develop here (arguably in large part thanks to Section 230); however, what many people fail to consider is that many of these platforms operate globally. As such, they are often trying to balance the rules and regulations of the U.S. with the rules and regulations of competing global interests.

As stated, Americans are very proud of the rights granted to them, including the First Amendment right to free speech (although after reading some opinions lately I’m beginning to wonder if half the population slept through or otherwise skipped high school civics class … or worse, slept through Constitutional Law while in law school). However, not all societies have this speech right. In fact, Europe’s laws value privacy, as a right, over freedom of expression. A prime example of this playing out is Europe’s Right to Be Forgotten law. If you aren’t familiar, under this EU law citizens can ask that even truthful, but perhaps older, information be taken down from the Internet (or, in some cases, not be indexed on EU search engines), or else the company hosting that information can face penalties.

When we demand that these tech giants cater to us here in the United States, we are forgetting that these companies have other rules and regulations to take into consideration when trying to set and implement standards for their users. What is good for us here in the U.S. may not be good for the rest of the world, who are also their customers.

SECTION 230 AND FIRST AMENDMENT MYTHS SPREAD LIKE WILDFIRE

What has been most frustrating to me, as someone who practices law in this area and has a lot of knowledge when it comes to the business of operating platforms, content moderation, and the applicability of Section 230, is how many people who should know better get it wrong. I’m talking about our President, Congressional representatives, and media outlets … so many of them getting it wrong. And what happens from there? You get other people who regurgitate the same uneducated or otherwise purposeful misstatements in articles that get shared, which further perpetuates the ignorance of the law and how things actually work.

For example, just today (June 8, 2020) Jeff Kosseff Tweeted out a thread that describes a history of the New York Times failing to accurately explain Section 230 in various articles and how one of these articles ended up being quoted by a NJ federal judge. It’s a good thread. You should read it.

MYTH: A SITE IS EITHER A “PLATFORM” OR A “PUBLISHER”

Contrary to what so many people I’ve listened to say, or articles I’ve read, when it comes to online UGC platforms there is no legal distinction between a “publisher” and a “platform.” You aren’t comparing the New York Times to Twitter. Working for a newspaper is not like working for a UGC platform. Those are entirely different business models … apples and oranges. Unfortunately, this is another spot where many people get caught up and confused.

UGC platforms are not in the business of creating content themselves but rather in the business of setting their own rules and allowing third parties (i.e., you and I here on this platform) to post content in accordance with those rules. Even though some publications erred on the side of caution around 2006-2008 when it came to editing UGC comments, that doesn’t mean that’s how the law was actually interpreted. We have decades’ worth of jurisprudence interpreting Section 230 (and interpreting the law is what the judicial branch does – not the FCC, which is an independent agency overseen by Congress). UPDATE 1/5/2021: there is now debate on this point; as of October 21, 2020, the FCC seems to think it does have the right to interpret Section 230. Platforms absolutely have the right to moderate content which they did not create and to kick people off of their platforms for violating their rules.

Think of it this way – have you ever heard your parents say (or maybe you’ve said this to your own kids) “My house, my rules. If you don’t like the rules, get your own house.” If anyone actually researches the history, that’s why Section 230 was created … to remove the moderator’s dilemma. A platform’s choice of what to allow, or disallow, has no bearing (for the sake of this argument here) on the applicability of Section 230. Arguably, UGC platforms also have a First Amendment right to choose what they want to publish, or not publish. So even without Section 230, they could still get rid of content they didn’t deem appropriate for their users/mission/business model.

MYTH: PLATFORMS HAVE TO BE NEUTRAL FOR SECTION 230 TO APPLY

Contrary to the misinformation being spewed all over (including by government representatives – which I find disappointing), Section 230 has never had a “neutrality” caveat for protection. Moreover, in the context of political speech, Senator Ron Wyden, a co-author of the law, recently stated on Twitter: “let me make this clear: there is nothing in the law about political neutrality.”

You can’t get much closer to understanding Congressional intent of the law than getting words directly from the co-author of the law. 

Quite frankly, there is no such thing as a “neutral platform.” That’s like saying a cheeseburger is a cheeseburger is a cheeseburger. Respectfully, some cheeseburgers from certain restaurants are just way better than others. Moreover, if we limited content on platforms to only what is lawful – i.e., a common carrier approach where platforms would be forced to treat all legal content equally and refrain from discrimination – then, as someone who deals with content escalations for platforms, I can tell you that we would have a very UGLY Internet, because sometimes people just suck, or their idea of a good time and of what’s funny isn’t exactly age-appropriate for all viewers/users.

MYTH: CENSORSHIP OF SPEECH BY A PLATFORM VIOLATES THE FIRST AMENDMENT

The First Amendment absolutely protects the freedom of speech. In theory, you are free to put on a sandwich board that says (insert whatever you take issue with) and walk up and down the street if you want. In fact, we’re currently seeing such constitutionally protected demonstrations with the protesters all over the country in connection with the death of George Floyd. Peaceful demonstration (and yes, I agree, not all of it was “peaceful”) is absolutely protected under the First Amendment.

What the First Amendment does not do (and this seems to get lost on people for some reason) is give one the right to amplification of that speech on a private platform. One might wish that were the case, but wishful thinking does not equal law. Unless and until there is some law, one that passes judicial scrutiny, deeming these private platforms a public square subject to the same restrictions imposed on the government, they absolutely do not have to let you say anything and everything you want. Chances are, this is also explained in their Terms of Service, which you probably didn’t read, but should.

If you’re going to listen to anyone provide an opinion on Section 230, perhaps one would want to listen to a co-author of the law itself:

Think of it this way: if you are a bar owner and you have a drunk and disorderly guy in your bar who is clearly annoying your other customers, would you want the ability to 86 the person, or do you want the government to tell you that as long as you are open to the public you have to let that person stay in your bar, even if you risk losing other customers because someone is being obnoxious? Of course you want to be able to bounce that person out! It’s not really any different for platform operators.

So for all of you chanting about how a platform’s censorship of your speech on its platform is impacting your freedom of speech – you don’t understand the plain language of the First Amendment. The law is “Congress shall make no law … abridging the freedom of speech…”, not “any person or entity shall make no rule abridging the freedom of speech…”, which is what people seem to think the First Amendment says or otherwise want the law to say.

LET’S KEEP THE CONVERSATION GOING BUT NOT MAKE RASH DECISIONS

Do platforms have the best of both worlds? Perhaps. But what is worse: the way it is now, with Section 230, or what it would be like without Section 230? Frankly, I choose a world with Section 230. Without Section 230, the Internet as we know it would change.

While we’ve never seen what the Internet looks like without Section 230, I can imagine we would end up with one of two options: 1) an Internet where platforms are afraid to moderate content, and therefore anything and everything would go up, leaving us with a very ugly Internet (because people are unfathomably rude and disgusting – I mean, content moderators have suffered from PTSD from having to look at what nasty humans try to share); or 2) an Internet where platforms are afraid of liability, such that either UGC sites cease to exist altogether or they go to a notice-and-takedown model where, as soon as someone sees something they are offended by or otherwise don’t like, they tell the platform the information is false, defamatory, harassing, etc., and that content likely automatically comes down. The Internet, and public discussion, would be at the whim of a heckler’s veto. You think speech is curtailed now? Just wait until the society of “everyone is offended” gets a hold of it.

As I mentioned to begin with, I don’t think that the Internet is perfect, but neither are humans and neither is life. While I believe there may be some concessions to be had, after in-depth studies and research (after all, we’ve only got some 24 years of data to work with, and those first years really don’t count in my book), I think it foolish to make rash decisions based upon political agendas. If the politicians want their own platform where they aren’t going to be “censored” and the people have ease of access to such information … create one! If people don’t like that platforms like Twitter, Facebook, or Google are censoring content … don’t use them, or use them less. Spend your time and money with a platform that better aligns with your desires and beliefs. There isn’t one out there? Well, nothing is stopping you from creating your own version (albeit, I understand that it’s easier said than done … but there are platforms out there trying to make that move). That’s what is great about this country … we have the ability to innovate … we have options … well, at least for now.

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.