The Temptation of ChatGPT for Legal Contracts: Why Human Expertise Reigns Supreme

Disclaimer: This article, while reviewed and slightly edited by a real live human prior to publication, was initially drafted by ChatGPT. Even ChatGPT knows its own limitations.

In this digital age, where technology continues to advance at a rapid pace, it’s no surprise that businesses and individuals seek innovative solutions for various tasks, including legal contract creation. With the rise of AI-powered language models like ChatGPT, one might be tempted to rely on them for generating legal contracts quickly and conveniently. However, while ChatGPT and similar tools offer impressive capabilities, there are significant reasons why they fall short when it comes to formal legal contract creation.

Understanding the Temptation

ChatGPT, with its ability to generate coherent and contextually relevant text, can be alluring for those seeking a quick solution for legal contract drafting. The convenience of inputting prompts and receiving instant responses may seem enticing, especially for individuals who are not well-versed in legal language or lack the resources for professional legal assistance. The prospect of saving time and money might make ChatGPT an appealing choice at first glance.

The Limitations of ChatGPT

  1. Lack of Contextual Understanding: While ChatGPT excels in understanding and generating text based on provided prompts, it lacks the ability to truly comprehend the nuances of legal contracts and their specific legal implications. It lacks a deep understanding of legal concepts, precedents, and regulations that are crucial for creating enforceable and comprehensive contracts.
  2. Legal Accuracy and Updates: Legal landscapes are dynamic, with laws, regulations, and court rulings subject to change. ChatGPT’s training data might not encompass the most up-to-date legal information, potentially leading to inaccuracies or outdated clauses in generated contracts. Attorneys stay abreast of legal developments and ensure that contracts align with current laws and regulations.
  3. Tailored and Specific Legal Advice: Legal contracts require a personalized touch to address the unique needs and circumstances of each client. ChatGPT, while proficient in generating text, cannot provide the tailored legal advice and expertise that an attorney can offer. Attorneys can carefully analyze a client’s situation, identify potential risks, and customize contracts accordingly.
  4. Complex Legal Language: Legal contracts often utilize specialized terminology and language that carry precise legal meanings. ChatGPT may not fully grasp the intricate nuances and subtleties of legal language, potentially resulting in ambiguous or poorly drafted provisions that could be exploited or lead to disputes.
  5. Confidentiality and Security: Legal contracts often involve sensitive and confidential information. Sharing such information with a third-party AI model might raise concerns regarding data privacy and security. Working with a trusted attorney ensures the confidentiality and protection of sensitive information.

The Indispensable Role of Human Expertise

While technology can undoubtedly enhance various aspects of our lives, legal contract creation necessitates the expertise, experience, and ethical judgment that only human attorneys can provide. Attorneys possess the legal knowledge, contextual understanding, and analytical skills required to create contracts that mitigate risks, protect client interests, and ensure compliance with applicable laws.

By engaging an attorney for legal contract creation, businesses and individuals can benefit from:

  1. Tailored Advice: Attorneys can assess unique circumstances, identify potential risks, and provide advice tailored to specific needs, ensuring contracts align with individual goals and requirements.
  2. Legal Compliance: Attorneys stay updated on legal changes and ensure that contracts adhere to current laws and regulations, reducing the risk of non-compliance and legal disputes.
  3. Clarity and Precision: Attorneys are skilled in crafting precise and unambiguous contract language, minimizing the potential for misinterpretation and reducing the likelihood of future disagreements.
  4. Risk Mitigation: Attorneys understand the potential risks associated with different types of contracts and can draft provisions that protect clients from liabilities and unforeseen circumstances.
  5. Dispute Resolution: In the unfortunate event of a contract dispute, attorneys provide legal representation and guidance, leveraging their expertise to achieve favorable outcomes through negotiation, mediation, or litigation.

While ChatGPT and similar AI language models have their merits, they cannot replace the indispensable role of human attorneys in the creation of formal legal contracts. The complexities, legal nuances, and individual circumstances involved in contract drafting necessitate the knowledge, experience, and personalized advice that only human legal professionals can provide. By seeking the guidance of an attorney, individuals and businesses can ensure the creation of enforceable, comprehensive, and customized contracts that protect their interests and mitigate legal risks.

[EDITOR NOTE: Look, we get it. Everyone wants a faster way to prepare content. Lawyers are no exception. That said, it’s important to understand the difference between when ChatGPT can be a useful tool, and when it’s best to have a set of trained legal eyes looking at and thinking about something. Legal documents are not the area where you want to cut corners. If you do, there is a good chance that you will be paying a lawyer to deal with a ChatGPT mess up … because, you know, ChatGPT makes up fake law and stuff. Don’t bet the company on ChatGPT … at least not yet.]

DISCLAIMER: This is for general information purposes only. This should not be relied upon as formal legal advice. If you have a legal matter that you are concerned with, you should seek out an attorney in your jurisdiction who may be able to advise you of your rights and options.

SCOTUS declines to rule on Section 230, again. – Gonzalez v. Google

The widely watched industry nail-biter of a case, Gonzalez v. Google, has been ruled upon by the Supreme Court of the United States. Many advocates of Section 230 thought for sure that SCOTUS would ruin the application of Section 230 as we know it; however, that didn't happen. Much to the dismay of many critics of Section 230, SCOTUS (and rightfully so under the facts of this case, in my opinion) kicked the can on the issue of Section 230 and declined to address the question.

CASE SUMMARY:

In this case, the parents and brothers of Nohemi Gonzalez, a U.S. citizen killed in the 2015 coordinated terrorist attacks in Paris, sued Google, LLC under 18 U.S.C. §§ 2333(a) and (d)(2). They alleged that Google was directly and secondarily liable for the attack that killed Gonzalez. The secondary-liability claims were based on the assertion that Google aided and abetted and conspired with ISIS through the use of YouTube, which Google owns and operates.

The District Court dismissed the complaint for failure to state a claim but allowed the plaintiffs to amend their complaint. However, the plaintiffs chose to appeal without amending the complaint. The Ninth Circuit affirmed the dismissal of most claims, citing Section 230 of the Communications Decency Act, but allowed the claims related to Google’s approval of ISIS videos for advertisements and revenue sharing through YouTube to proceed.

The Supreme Court granted certiorari to review the Ninth Circuit’s application of Section 230. However, since the plaintiffs did not challenge the rulings on their revenue-sharing claims, and in light of the Supreme Court’s decision in Twitter, Inc. v. Taamneh, the Court found that the complaint failed to state a viable claim for relief. The Court acknowledged that the complaint appeared to fail under the standards set by Twitter and the Ninth Circuit’s unchallenged holdings. Therefore, the Court vacated the judgment and remanded the case to the Ninth Circuit for reconsideration in light of the Supreme Court’s decision in Twitter. [Author Note: If you listen to the oral argument, you’d see just how weak of a case was brought by Plaintiff].

In summary, the Supreme Court did not address the viability of the plaintiffs’ claims but indicated that the complaint seemed to fail to state a plausible claim for relief, and therefore, declined to address the application of Section 230 in this case. The case was remanded to the Ninth Circuit for further consideration.

DISCLAIMER & OTHER POINTS:

I'm currently sitting at the Tenth Annual Conference on Governance of Emerging Technology and Science. There is a lot of talk about AI, including ChatGPT. Because the Gonzalez opinion was so incredibly short by comparison, I thought I would test out ChatGPT's ability to summarize this case. Having followed this case, and read the SCOTUS opinion myself, I was quite surprised with the summary it spit out, which is what you just read above. For those that want to read the case opinion for yourself (it's only three pages), you can review the SCOTUS opinion linked below. I've also included the link to the Twitter case as well (which is a more typical 38-page opinion). In case you are curious, I also asked ChatGPT to summarize the Twitter case; however, there appears to be some sort of character limit, as I received an error message about the request being too long. We're all learning.
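For the technically curious: a "request too long" error is the usual symptom of an input-length limit, and the common workaround is to split a long document into smaller chunks, summarize each chunk, and then summarize the combined summaries. Below is a minimal, hypothetical Python sketch of that approach; the summarize() function is a stand-in for whichever AI service you might use, not an actual ChatGPT API call.

```python
def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
    """Split a long document into chunks under a character limit,
    breaking on paragraph boundaries where possible."""
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks


def summarize(text: str) -> str:
    """Placeholder for a call to an AI summarization service (hypothetical)."""
    raise NotImplementedError("swap in your preferred AI service here")


def summarize_long_opinion(opinion_text: str) -> str:
    """Summarize each chunk, then summarize the combined chunk summaries."""
    chunk_summaries = [summarize(chunk) for chunk in chunk_text(opinion_text)]
    return summarize("\n\n".join(chunk_summaries))
```

Again, this is only a sketch under the assumption that the length limit is character-based; the details will vary by tool.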

Citation: Gonzalez v. Google, 598 U.S. ___ (May 18, 2023)

Citation: Twitter v. Taamneh, 598 U.S. ___ (May 18, 2023)

DISCLAIMER: This is for general information purposes only. This should not be relied upon as formal legal advice. If you have a legal matter that you are concerned with, you should seek out an attorney in your jurisdiction who may be able to advise you of your rights and options.

NY District Court Swings a Bat at “The Hateful Conduct Law” – Volokh v. James

This February 14th (2023), Valentine's Day, the NY Federal District Court showed no love for New York's Hateful Conduct Law when it granted a preliminary injunction to halt it. So this is, to me, an exceptionally fun case because it includes not only the First Amendment (to the United States Constitution) but also Section 230 of the Communications Decency Act, 47 U.S.C. § 230. I'm also intrigued because the renowned Eugene Volokh, Locals Technology, Inc., and Rumble Canada, Inc. are the Plaintiffs. If Professor Volokh is involved, it's likely to be an interesting argument. The information about the case below has been pulled from the Court Opinion and various linked websites.

Plaintiffs: Eugene Volokh, Locals Technology, Inc., and Rumble Canada, Inc.

Defendant: Letitia James, in her official capacity as New York Attorney General

Case No.: 22-cv-10195 (ALC)

The Honorable Andrew L. Carter, Jr. started the opinion with the following powerful quote:

"Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express 'the thought that we hate.'"

Matal v. Tam, 137 S.Ct. 1744, 1764 (2017)

Before we get into what happened, it's worth taking a moment to explain who the Plaintiffs in the case are. Eugene Volokh ("Volokh") is a renowned First Amendment law professor at UCLA. In addition, Volokh is the co-owner and operator of the popular legal blog known as the Volokh Conspiracy. Rumble operates a website, similar to YouTube, which allows third-party independent creators to upload and share video content. Rumble sets itself apart from other similar platforms because it has a "free speech purpose" and its "mission [is] 'to protect a free and open internet' and to 'create technologies that are immune to cancel culture.'" Locals Technology, Inc. ("Locals") is a subsidiary of Rumble and also operates a website that allows third-party content to be shared among paid, and unpaid, subscribers. Similar to Rumble, Locals also reports having a "pro-free speech purpose" and a "mission of being 'committed to fostering a community that is safe, respectful, and dedicated to the free exchange of ideas.'" Suffice it to say, the Plaintiffs are no strangers to the First Amendment or Section 230. So how did these parties become Plaintiffs? New York passed a well-intentioned, but arguably unconstitutional, law that could very well negatively impact them.

On May 14, 2022, some random racist nut job used Twitch (a social media site) to livestream himself carrying out a mass shooting on shoppers at a grocery store in Buffalo, New York. This disgusting act of violence left 10 people dead and three people wounded. As with most atrocities, and with what I call the "train wreck effect", this video went viral on various other social media platforms. In response to the atrocity, New York's Governor Kathy Hochul kicked the matter over to the Attorney General's Office for investigation, with an apparent instruction to focus on "the specific online platforms that were used to broadcast and amplify the acts and intentions of the mass shooting," and directed the Attorney General's Office to "investigate various online platforms for 'civil or criminal liability for their role in promoting, facilitating, or providing a platform to plan or promote violence.'" Apparently the Governor hasn't heard about Section 230, but I'll get to that in a minute. After its investigation, the Attorney General's Office released a report, and later a press release, stating that "[o]nline platforms should be held accountable for allowing hateful and dangerous content to spread on their platforms" because an alleged "lack of oversight, transparency, and accountability of these platforms allows hateful and extremist views to proliferate online." This is where one, having any knowledge about this area of law, should insert the facepalm emoji. If you aren't familiar with this area of law, this will help explain (a little – we're trying to keep this from being a dissertation).

Now no reasonable person will disagree that this event was tragic and disgusting. Humans are weird beings and for whatever reason (though I suspect a deep dive into psychology would provide some insight), we cannot look away from a train wreck. We’re drawn to it like a moth to a flame. Just look at any news organization and what is shared. You can’t tell me that’s not filled with “train wreck” information. Don Henley said it best in his lyrics in the 1982 song Dirty Laundry, talking about the news: “she can tell you about the plane crash with a gleam in her eye” … “it’s interesting when people die, give us dirty laundry”. A Google search for the song lyrics will give you full context if you’re not a Don Henley fan … but even 40 plus years later, this is still a truth.

In an effort to combat the perceived harms from the atrocity that went viral, New York enacted The Hateful Conduct Law, entitled "Social media networks; hateful conduct prohibited," which took effect on December 3, 2022. What in the world does that mean? Well, the law applies to "social media networks" and defines "hateful conduct" as: "[T]he use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression." N.Y. Gen. Bus. Law § 394-ccc(1)(a). Okay, but still ...

As the Court's opinion (with citations omitted) explains The Hateful Conduct Law:

[T]he Hateful Conduct Law requires that social media networks create a complaint mechanism for three types of "conduct": (1) conduct that vilifies; (2) conduct that humiliates; and (3) conduct that incites violence. This "conduct" falls within the law's definition if it is aimed at an individual or group based on their "race", "color", "religion", "ethnicity", "national origin", "disability", "sex", "sexual orientation", "gender identity" or "gender expression".

The Hateful Conduct Law has two main requirements: (1) a mechanism for social media users to file complaints about instances of "hateful conduct" and (2) disclosure of the social media network's policy for how it will respond to any such complaints. First, the law requires a social media network to "provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct." This mechanism must "be clearly accessible to users of such network and easily accessed from both a social media networks' application and website. . . ." and must "allow the social media network to provide a direct response to any individual reporting hateful conduct informing them of how the matter is being handled." N.Y. Gen. Bus. Law § 394-ccc(2).

Second, a social media network must "have a clear and concise policy readily available and accessible on their website and application. . ." N.Y. Gen. Bus. Law § 394-ccc(3). This policy must "include how such social media network will respond and address the reports of incidents of hateful conduct on their platform." N.Y. Gen. Bus. Law § 394-ccc(3).

The law also empowers the Attorney General to investigate violations of the law and provides for civil penalties for social media networks which "knowingly fail to comply" with the requirements. N.Y. Gen. Bus. Law § 394-ccc(5).
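For readers on the platform side wondering what that complaint-mechanism requirement might translate to in practice, here is a minimal, purely illustrative Python sketch of the two statutory pieces: a way for users to report "hateful conduct," a direct response back to the reporter, and a published response policy. Every name and structure below is my own invention for illustration; this is not a compliance recommendation or anyone's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical illustration only: invented names and structure meant to
# visualize the statute's two requirements, not a real compliance design.

RESPONSE_POLICY = (
    "Reports of hateful conduct are reviewed by our moderation team, and the "
    "reporting user receives a direct response describing how the report was handled."
)

@dataclass
class HatefulConductReport:
    reporter_id: str
    content_url: str
    description: str
    received_at: datetime = field(default_factory=datetime.utcnow)
    status: str = "received"

REPORTS: list[HatefulConductReport] = []

def submit_report(reporter_id: str, content_url: str, description: str) -> str:
    """Accept a user complaint and return a direct acknowledgment to the reporter."""
    report = HatefulConductReport(reporter_id, content_url, description)
    REPORTS.append(report)
    return (
        f"Report #{len(REPORTS)} received on {report.received_at:%Y-%m-%d}. "
        + RESPONSE_POLICY
    )
```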

Naturally this raised a lot of questions. How far reaching is this law? Who and what counts as a “social media network”? What persons or entities would be impacted? Who decides what is “hateful conduct”? Does the government have the authority to try and regulate speech in this way?

Two days before the law was to go into effect, on December 1, 2022, the instant action was commenced by the Plaintiffs, alleging both facial and as-applied challenges to The Hateful Conduct Law. Plaintiffs argued that the law "violates the First Amendment because it: (1) is a content viewpoint-based regulation of speech; (2) is overbroad; and (3) is void for vagueness. Plaintiffs also alleged that the law is preempted by" Section 230 of the Communications Decency Act.

For the full discussion and analysis of the First Amendment arguments, it's best to review the full opinion; however, the Court opened with the following summary of its position on the First Amendment as applied to the law:

"With the well-intentioned goal of providing the public with clear policies and mechanisms to facilitate reporting hate speech on social media, the New York State legislature enacted N.Y. Gen. Bus. Law § 394-ccc ("the Hateful Conduct Law" or "the law"). Yet, the First Amendment protects from state regulation speech that may be deemed "hateful" and generally disfavors regulation of speech based on its content unless it is narrowly tailored to serve a compelling governmental interest. The Hateful Conduct Law both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users, without articulating a compelling governmental interest or ensuring that the law is narrowly tailored to that goal."

Plaintiffs' preemption argument was that Section 230 of the Communications Decency Act preempts the law because the law imposes liability on websites by treating them as publishers. As the Court outlines (some citations to cases omitted):

The Communications Decency Act provides that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." 47 U.S.C. § 230(c)(1). The Act has an express preemption provision which states that "[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section." 47 U.S.C. § 230(e)(3).

As compared to the section of the Opinion regarding the First Amendment, the Court gives very little analysis on the Section 230 preemption claim beyond making the following statements:

"A plain reading of the Hateful Conduct Law shows that Plaintiffs' argument is without merit. The law imposes liability on social media networks for failing to provide a mechanism for users to complain of "hateful conduct" and for failure to disclose their policy on how they will respond to complaints. N.Y. Gen. Bus. Law § 394-ccc(5). The law does not impose liability on social media networks for failing to respond to an incident of "hateful conduct", nor does it impose liability on the network for its users' own "hateful conduct". The law does not even require that social media networks remove instances of "hateful conduct" from their websites. Therefore, the Hateful Conduct Law does not impose liability on Plaintiffs as publishers in contravention of the Communications Decency Act." (emphasis added)

Hold up sparkles. So the Court recognizes that platforms cannot be held liable (in these instances anyway) for third-party content, no matter how ugly that content might be, yet is fine with forcing (punishing, in my opinion) a platform to spend big money on development to create all these content reporting mechanisms, and to set transparency policies, for content that they actually have no legal requirement to remove? How does this law make sense in the first place? What is the point (besides trying to trap platforms into having a policy that, if they don't follow it, could give rise to an action for unfair or deceptive advertising)? This doesn't encourage moderation. In fact, I'd argue that it does the opposite and encourages a website to say "we don't do anything about speech that someone claims to be harmful because we don't want liability for failing to do so if we miss something." In my mind, this is a punishment based upon third-party content. You don't need a "reporting mechanism" for content that people aren't likely to find offensive (like cute cat videos). To this end, I can see why Plaintiffs raised a Section 230 preemption argument … because if you drill it down, the law is still trying to force websites to take an action to deal with undesirable third-party content (and then punish them if they don't follow whatever their policy is). In my view, it's an attempt to do an end run around Section 230. The root issue is still undesirable third-party content. Consequently, I'm not sure I agree with the Court's position here. I don't think the court drilled down enough to the root of the issue.

Either way, the Court did, as explained in the beginning, grant Plaintiffs' Motion for Preliminary Injunction (based upon the First Amendment arguments), which, at present, prohibits New York from trying to enforce the law.

Citation: Volokh v. James, Case No. 22-cv-10195 (ALC) (S.D.N.Y., Feb. 14, 2023)

DISCLOSURE: This is not meant to be legal advice nor should it be relied upon as such.

Section 230 doesn’t protect against a UGC platform’s own unlawful conduct – Fed. Trade Comm’n v. Roomster Corp

This seems like a no-brainer to anyone who understands Section 230 of the Communications Decency Act, but for some reason it still hasn't stopped defendants from making the tried and failed argument that Section 230 protects a platform from liability for its own unlawful conduct.

Plaintiffs: Federal Trade Commission, State of California, State of Colorado, State of Florida, State of Illinois, Commonwealth of Massachusetts, and State of New York

Defendants: Roomster Corporation, John Shriber, individually and as an officer of Roomster, and Roman Zaks, individually and as an officer of Roomster.

Roomster (roomster.com) is an internet-based (desktop and mobile app) room and roommate finder platform that purports to be an intermediary (i.e., the middle man) between individuals who are seeking rentals, sublets, and roommates. For anyone that has been around for a minute in this industry, you might be feeling like we've got a little bit of a Roommates.com legal situation going on here, but it's different. Roomster, like many platforms that allow third-party content, also known as User Generated Content ("UGC") platforms, does not verify listings or ensure that the listings are real or authentic, and has allegedly allowed postings to go up where the address of the listing was a U.S. Post Office. Now this might seem out of the ordinary to an everyday person reading this, but I can assure you, it's nearly impossible for any UGC platform to police every listing, especially for a small company with any reasonable volume of traffic, and moderation only gets harder as the platform grows. That's just the truth of operating a UGC platform.

Notwithstanding these fake posting issues, Plaintiffs allege that Defendants have falsely represented that properties listed on the Roomster platform are real, available, and verified. [OUCH!] They further allege that Defendants have created or purchased thousands of fake positive reviews to support these representations and placed fake rental listings on the Internet to drive traffic to their platform. [DOUBLE OUCH!] If true, Roomster may be in for a ride.

The FTC has alleged that Defendants' acts or practices violate Section 5(a) of the FTC Act, 15 U.S.C. § 45(a) (which in layman's terms is the federal law against unfair methods of competition), and the states have alleged violations of their various state-law counterparts prohibiting deceptive acts and practices. At this point, based on the alleged facts, it seems about right to me.

Roomster filed a Motion to Dismiss pursuant to Rule 12(b)(6) for Plaintiffs' alleged failure to state a claim, for various reasons that I won't discuss but that you can read about in the case. Roomster also argued that "even if Plaintiffs may bring their claims, Defendants cannot be held liable for injuries stemming from user-generated listings and reviews because … they are interactive computer service providers and so are immune from liability for inaccuracies in user-supplied content, pursuant to Section 230 of the Communications Decency Act, 47 U.S.C. § 230." Where is the facepalm emoji when you need it? Frankly, that's a "hail-mary" and a total waste of an argument … because Section 230 does not immunize a defendant from liability for its own unlawful conduct. Indeed, a platform can be held liable for offensive content on its service or system if it contributes to the development of what makes the content unlawful. This is also true where a platform has engaged in deceptive practices, or has had direct participation in a deceptive scheme. Fortunately, like many courts before it, the court in this case saw through the crap and rightfully denied the Motion to Dismiss on this (and other points).

I smell a settlement in the air, but only time will tell.

Case Citation: Fed. Trade Comm’n v. Roomster Corp., Case No. 22 Civ 7389 (S.D. N.Y., Feb. 1, 2023)

DISCLAIMER: This is for general information only. None of this is meant to be legal advice nor should it be relied upon as such.

California Assembly Bill 1687, designed to protect against age discrimination, gets tagged by Ninth Circuit on First Amendment grounds: IMDb.com, Inc. v. Becerra

On June 19, 2020, the Ninth Circuit Court of Appeals ruled that the content-based restrictions on speech contained within California's Assembly Bill 1687 were facially unconstitutional because the law "does not survive First Amendment scrutiny."

I feel like if you live outside of glamorous places like California, New York, and Florida, you may not be paying attention to laws being pushed by organizations like the Screen Actors Guild, aka "SAG." Nevertheless, I try to keep my ear to the ground for cases that involve the First Amendment and Section 230 of the Communications Decency Act. This case happens to raise both issues, although only the First Amendment matter is addressed here.

For those that may be unfamiliar, IMDb.com is the Internet Movie Database, a free public website that includes information about movies, television shows, and video games. It also contains information on actors and crew members in the industry, which may include the subject's age or date of birth. This is an incredibly popular site; the court opinion noted that as of January 2017 "it ranked 54th most visited website in the world." The information on the site is generated by users (just like you and I), but IMDb does employ a "Database Content Team tasked with reviewing the community's additions and revisions for accuracy."

Outside of the "free" user generated section, IMDb also introduced, back in 2002, a subscription-based service called "IMDbPro" for industry professionals (actors/crew and recruiters) to essentially act as a LinkedIn for Hollywood – providing space for professionals to upload resume-type information, headshots, etc., where casting agents could search the database for talent.

Back in 2016, SAG apparently pushed for regulation in California (which was enacted as Assembly Bill 1687) that arguably targeted IMDb, in an effort to curtail alleged age discrimination in the entertainment industry. No doubt a legitimate concern (as it is in many industries); however, good intentions often result in bad law.

AB 1687 was signed into law, codified at Cal. Civ. Code § 1798.83.5 and included the following provision:

A commercial online entertainment employment service provider that enters into a contractual agreement to provide employment services to an individual for a subscription payment shall not, upon request by the subscriber, do either of the following: (1) [p]ublish or make public the subscriber’s date of birth or age information in an online profile of the subscriber [or] (2) [s]hare the subscriber’s date of birth or age information with any Internet Web sites for the purpose of publication.

Cal. Civ. Code § 1798.83.5(b)(1)-(2)

The statute also provides, in pertinent part:

A commercial online entertainment employment service provider subject to subdivision (b) shall, within five days, remove from public view in an online profile of the subscriber the subscriber's date of birth and age information on any companion Internet Web sites under its control upon specific request by the subscriber naming the Internet Web sites.

Cal. Civ. Code § 1798.83.5(c)

The practical effect of these provisions is that IMDbPro subscribers can request that IMDb remove the subscriber's age or date of birth from the subscriber's profile (which I would think subscribers could do on their own to the extent they control such profile data) AND, more problematically, from anywhere else on IMDb's website where such information exists, regardless of who created that content. That reaches content the IMDbPro subscribers may not have control over, as it may have been generated by third-party users of the site.

The Court opinion explained that "[b]efore AB 1687 took effect, IMDb filed a complaint under 42 U.S.C. § 1983 in the Northern District of California to prevent its enforcement. IMDb alleged that AB 1687 violated both the First Amendment and Commerce Clause of the Constitution, as well as the Communications Decency Act, 47 U.S.C. § 230(f)(2)." While there was much back and forth between the parties, the crux of the dispute, and what was crucial for the appeal, was the language prohibiting IMDb from publishing age information without regard to the source of that information.

When considering the statutory language restricting what could be posted, the Court of Appeals concluded:

  • AB 1687 implemented a content-based restriction on speech (i.e., on the dissemination of date of birth or age information) that is subject to First Amendment scrutiny.
  • AB 1687 did not present a situation where reduced protection would apply (e.g., where the speech at issue is balanced against a social interest in order and morality).
    • IMDb’s content did not constitute Commercial Speech.
    • IMDb’s content did not facilitate illegal conduct.
    • IMDb’s content did not implicate privacy concerns.
  • AB 1687 does not survive strict scrutiny because it was not the least restrictive means to accomplish the goal and it wasn’t narrowly tailored.

In conclusion the Court articulated a position that I wholly agree with: “Unlawful age discrimination has no place in the entertainment industry, or any other industry. But not all statutory means of ending such discrimination are constitutional.”

Citation: IMDb.com, Inc. v. Becerra, Case Nos. 18-15463, 18-15469 (9th Cir. 2020)

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Section 230, the First Amendment, and You.

Maybe you've heard about "Section 230" on the news, or through social media channels, or perhaps by reading a little about it through an article written by a major publication … but unfortunately, that doesn't mean that the information you have received is necessarily accurate. I cannot count how many times over the last year I've seen what seem to be purposeful misstatements of the law … which then get repeated over and over again, perhaps to fit some sort of political agenda. After all, each side of the aisle, so to speak, is attacking the law, but curiously for different reasons. While I absolutely despise lumping people into categories, political or otherwise, the best way I can describe the ongoing debate is that the liberals believe that there is not enough censoring going on, and the conservatives think there is too much censorship going on. Meanwhile, you have the platforms hanging out in the middle, often struggling to do more, with less…

In this article I will try to explain why I believe it is important that even lay people understand Section 230 and dispel some of the most common myths that continually spread throughout the Internet as gospel … even from our own Congressional representatives.

WHY LAY PEOPLE SHOULD CARE ABOUT SECTION 230

Not everyone who reads this will remember what it was like before the Internet. If you don't, ask your elders what it was like to be "talked at" by your local television news station or newspaper. There was no real open dialog absent face-to-face or telephone communications. Your audience was limited in terms of who you could share information with. Even if you wrote a "letter to the Editor" at a local newspaper, it didn't mean that your "opinion" was necessarily going to be published. If you wanted to share a picture, you had to actually use a camera and film, take it to a developer, wait two weeks, pay for the developing, and pray that your pictures didn't suck. I can't tell you how many blurry photographs I have in a shoe box somewhere. Then you had to mail, hand out, or show your friends in person. And don't even get me started about a phone that was stuck to the wall, where your "privacy" was limited to having a long phone cord that might stretch into the bathroom so you could shut the door. If you're old enough to remember that, and are nodding your head in agreement, I encourage you to spend some time remembering what that was like. It seems that we non-digital natives are at a point in life where we take the technology we have for granted; and the digital natives (meaning they were born with all of this technology) don't really know the struggles of life without it.

If you like being able to share information freely, and to comment on information freely, you absolutely should care about what many refer to as "Section 230." So many of my friends, family, and colleagues say "I don't understand Section 230 and I don't care to … that's your space," yet these are the people I see posting content online about their business via LinkedIn or other social media platforms, sharing reviews of businesses they have been to, looking up information on Wikimedia, sharing their general opinions, and otherwise engaging in dialog and debate over topics that are important to them. In a large way, whether you know it or not, Section 230 has powered your ability to interact online in this way and has drastically shaped the Internet as we know it today.

IN GENERAL: SECTION 230 EXPLAINED

The Communications Decency Act (47 U.S.C. § 230) (often referred to as "Section 230," the "CDA," or even "CDA 230"), in brief, is a federal law enacted in 1996 that, with a few exceptions carved out within the statute, protects the owners of websites/search engines/applications (each often synonymously referred to as "platforms") from liability for third-party content. Generally speaking, if the platform didn't actually create the content, it typically isn't liable for it. Indeed, there are a few exceptions, but for now, we'll keep this simple. Platforms that allow interactive third-party content are often referred to as user generated content ("UGC") sites. Facebook, Twitter, Snapchat, Reddit, Tripadvisor, and Yelp are all examples of such platforms, and reasonable minds would likely agree that there is social utility behind each of these sites. That said, these household-recognized platform "giants" aren't the only platforms on the internet that have social utility and benefit from the CDA. Indeed, it covers all of the smaller platforms, including bloggers or journalists who desire to allow people to comment about articles/content on their websites. Suffice it to say, there are WAY more little guys than there are big guys, or "Big Tech" as some refer to it.

If you’re looking for some sort of a deep dive on the history of the law, I encourage you to pick up a copy of Jeff Kosseff’s book titled The Twenty-Six Words That Created The Internet. It’s a great read!

ONGOING “TECHLASH” WITH SECTION 230 IN THE CROSS-HAIRS

One would be entirely naïve to suggest that the Internet is perfect. If you ask me, it's far from perfect. I readily concede that indeed there are harms that happen online. To be fair, harms happen offline too and they always have. Sometimes humans just suck. I've discussed a lot of this in my ongoing blog article series Fighting Fair on the Internet. What has been interesting to me is that many seem to want to blame people's bad behavior on technology and to try and hold technology companies liable for what bad people do using their technology.

I look at technology as a tool. By analogy, a hammer is a tool yet we don’t hold the hammer manufacturing company or the store that sold the hammer to the consumer liable when a bad guy goes and beats someone to death with it. I imagine the counter-argument is that technology is in the best position to help stop the harms. Perhaps that may be true to a degree (and I believe many platforms do try to assist by moderating content and otherwise setting certain rules for their sites) but the question becomes, should they actually be liable? If you’re a Section 230 “purist” the answer is “No.” Why? Because Section 230 immunizes platforms from liability for the content that other people say or do on their platforms. Platforms are still liable for the content they choose to create and post or otherwise materially contribute to (but even that is getting into the weeds a little bit).

The government, however, seems to have its own set of ideas. We already saw an amendment to Section 230 with FOSTA (the anti-sex trafficking amendment). Unfortunately, good intentions often make for bad law, and, in my opinion, FOSTA was one of those laws, having arguably been proven to cause more harm than good. I could explain why, but I'll save that discussion for another time.

Then, in February of 2020, the DOJ had a "workshop" on Section 230. I was fortunate enough to be in the audience in Washington, D.C., where it was held, and recently wrote an article breaking down that "workshop." If you're interested in all the juicy details, feel free to read that article, but in summary it basically was four hours' worth of: humans are bad and do bad things; technology is a tool with which bad humans do bad things; technology/platforms need to find a way to solve the bad human problem or face liability for what bad humans occasionally do with the tools they create; and we want to make changes to the law even though we have no empirical evidence to support the position that this is an epidemic rather than a minority … because bad people.

Shortly thereafter, the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act, or EARN IT Act, was dropped, a bill designed to prevent the online sexual exploitation of children. While this sounds noble (FOSTA did too), when you unpack it all and look at the bigger picture, it's another government attempt to mess with free speech and online privacy/security in the form of yet another amendment to Section 230, under the guise of being "for the children." I have lots of thoughts on this, but I will save them for another article another day too.

This brings us to the most recent attack on Section 230. The last two (2) weeks have been a "fun" time for those of us who care about Section 230 and its application. Remember how I mentioned above that some conservatives are of the opinion that there is too much censorship online? This often refers to the notion that social media platforms (Facebook, Twitter, and even Google) censor or otherwise block conservative speech. Setting aside whether this actually happens or not (I've heard arguments pointing in both directions on this issue), President Trump shined a big light on this notion.

Let me first start off by saying that there is a ton of misinformation shared online. It doesn't help that many people in society will quickly share things without actually reading them or conducting research to see if the content they are sharing has any validity, but will spend 15 minutes taking a data mining quiz only to find out what kind of a potato they are. As a side note, I made that up in jest and then later found out that there is a quiz to find out what kind of potato you are. Who knew the 2006 movie Idiocracy was going to be so prophetic? Although, I can't really say this is just something that happens online. Anyone that ever survived junior high and high school knows that gossip is often riddled with misinformation, and somehow we seem to forget about the silliness that happens offline too. The Internet, however, has just given the gossipers a megaphone … to the world.

Along with other perceived harmful content, platforms have been struggling with how to handle such misinformation. Some have considered adding more speech by way of notifications, or "labels" as Twitter calls them, to advise their users that the information may be wholly made up or modified, shared in a deceptive manner, likely to impact public safety, or otherwise likely to cause serious harm. Best I can tell, at least as far as Twitter goes, this seems to be a relatively new effort. Other platforms like Facebook have apparently resorted to taking people's accounts down, putting odd cover-ups over photos, etc. on content they deem "unworthy" for their users. Side note: While ideal in a perfect world, I'm not personally a fan of social media platforms fact checking because: 1) it's very hard to be an arbiter of truth; 2) it's incredibly hard to do it at scale; 3) once you start, people will expect you to do it on every bit of content that goes out – and that's virtually impossible; and 4) if you fail to fact check something that turns out to be false or otherwise misleading, one might assume that such content is accurate because they have come to rely on the fact checking. And who checks the fact checkers? Not that my personal opinion matters, but I think this is where the bigger tech companies have created more problems for themselves (and arguably for all the little sites that rely on Section 230 to operate without fear of liability).

So what kicked off the latest “Section 230 tirade”? Twitter “fact checked” President Trump in two different tweets on May 26th, 2020 by adding in a “label” to the bottom of the Tweets (which you have to click on to actually see – they don’t transfer when you embed them as I’ve done here) that said “Get the facts about mail-in-ballots.” This clearly suggests that Twitter was in disagreement with information that the President Tweeted and likely wanted its users to be aware of alternative views.

https://twitter.com/realDonaldTrump/status/1265255845358645254?s=20

To me, that doesn’t seem that bad. I can absolutely see some validity to President Trump’s concern. I can also see an alternative argument, especially since I typically mail in my voting ballot. Either way, adding content in this way, versus taking it down altogether, seems like the route that provides people more information to consider for themselves, not less. In any event, if you think about it, pretty much everything that comes out of a politician’s mouth is subjective. Nevertheless, President Trump got upset over the situation and then suggested that Twitter was “completely stifling FREE SPEECH” and then made veiled threats about not allowing that to happen.

https://twitter.com/realDonaldTrump/status/1265427539008380928?s=20

If we know anything about this President, it is that when he's annoyed with something, he will take some sort of action. President Trump ultimately ended up signing an Executive Order on "Preventing Online Censorship" a mere two (2) days later. For those that are interested, Santa Clara Law Professor Eric Goldman, while certainly left-leaning and not favorable to our commander in chief, provided a great legal analysis of the Executive Order, calling it "political theater." Even if you align yourself with the "conservative" base, I would encourage you to set aside the Professor's personal opinions (we all have opinions) and focus on the meat of the legal argument. It's good.

Of course, and as expected, the Internet loses its mind and all the legal scholars and practitioners come out of the woodwork, commenting on Section 230 and the newly signed Executive Order, myself included. The day after the Executive Order was signed (and likely after President Trump read all the criticisms), he Tweeted out "REVOKE 230!"

https://twitter.com/realDonaldTrump/status/1266387743996870656?s=20

So this is where I have to sigh heavily. Indeed there is irony in the fact that the President is calling for the revocation of the very same law that allowed innovation and Twitter to even become a “thing” and which also makes it possible for him to reach out and connect to millions of people, in real time, in a pretty much unfiltered way as we’ve seen, for free because he has the application loaded on his smart phone. In my opinion, but for Section 230, it is entirely possible Twitter, Facebook and all the other forms of social media and interactive user sites would not exist today; at least not as we know it. Additionally, I find it ironic that President Trump is making free speech arguments when he’s commenting about, and on, a private platform. For those of you that slept through high school civics, the First Amendment doesn’t apply to private companies … more about that later.

As I said though, this attack on Section 230 isn't just stemming from the conservative side. Even Joe Biden has suggested that Section 230 should be "repealed immediately," but he's on the "social media companies censor too little" train, which is the complete opposite of the reason that people like President Trump want it revoked.

HOW VERY AMERICAN OF US

How many times have you heard that Americans are self-centered jerks? Well, Americans do love their Constitutional rights, especially when it comes to falling in love with their own opinions and the freedom to share those opinions. Moreover, when it comes to the whole content moderation and First Amendment debate, we often look at tech giants as purely American companies. True, these companies did develop here (arguably in large part thanks to Section 230); however, what many people fail to consider is that many of these platforms operate globally. As such, they are often trying to balance the rules and regulations of the U.S. with the rules and regulations of competing global interests.

As stated, Americans are very proud of the rights granted to them, including the First Amendment right to free speech (although after reading some opinions lately I'm beginning to wonder if half the population slept through or otherwise skipped high school civics class … or worse, slept through Constitutional Law while in law school). However, not all societies have this speech right. In fact, Europe's laws value privacy as a right over the freedom of expression. A prime example of this playing out is Europe's Right to Be Forgotten law. If you aren't familiar, under this EU law citizens can ask that even truthful, but perhaps older, information be taken down from the Internet (or in some cases not be indexed on EU search engines), or else the company hosting that information can face a penalty.

When we demand that these tech giants cater to us, here in the United States, we are forgetting that these companies have other rules and regulations that they have to take into consideration when trying to set and implement standards for their users. What is good for us here in the U.S. may not be good for the rest of the world, which are also their customers.

SECTION 230 AND FIRST AMENDMENT MYTHS SPREAD LIKE WILDFIRE

What has been most frustrating to me, as someone who practices law in this area and has a lot of knowledge when it comes to the business of operating platforms, content moderation, and the applicability of Section 230, is how many people who should know better get it wrong. I'm talking about our President, Congressional representatives, and media outlets … so many of them, getting it wrong. And what happens from there? You get other people who regurgitate the same uneducated or otherwise purposeful misstatements in articles that get shared, which further perpetuates the ignorance of the law and how things actually work.

For example, just today (June 8, 2020) Jeff Kosseff Tweeted out a thread that describes a history of the New York Times failing to accurately explain Section 230 in various articles and how one of these articles ended up being quoted by a NJ federal judge. It’s a good thread. You should read it.

MYTH: A SITE IS EITHER A “PLATFORM” OR A “PUBLISHER”

Contrary to so many people I've listened to speak, or articles that I've read, when it comes to online UGC platforms, there is no distinction between a "publisher" and a "platform." You aren't comparing the New York Times to Twitter. Working for a newspaper is not like working for a UGC platform. Those are entirely different business models … apples and oranges. Unfortunately, this is another spot where many people get caught up and confused.

UGC platforms are not in the business of creating content themselves but rather in the business of setting their own rules and allowing third parties (i.e., you and I here on this platform) to post content in accordance with those rules. Even if some publications erred on the side of caution around 2006-2008 regarding editing UGC comments, that doesn't mean that's how the law was actually interpreted. We have decades' worth of jurisprudence interpreting Section 230 (and interpreting the law is what the judicial branch does, not the FCC, which is an independent agency overseen by Congress). UPDATE 1/5/2021: there is now debate on whether or not the FCC can interpret it, and as of October 21, 2020, the FCC seems to think it does have such a right. Platforms absolutely have the right to moderate content which they did not create and to kick people off of their platform for violating their rules.

Think of it this way – have you ever heard your parents say (or maybe you've said this to your own kids) "My house, my rules. If you don't like the rules, get your own house."? If anyone actually researches the history, that's why Section 230 was created … to remove the moderator's dilemma. A platform's choice of what to allow, or disallow, has no bearing (for the sake of this argument here) on the applicability of Section 230. Arguably, UGC platforms also have a First Amendment right to choose what they want to publish, or not publish. So even without Section 230, they could still get rid of content they didn't deem appropriate for their users/mission/business model.

MYTH: PLATFORMS HAVE TO BE NEUTRAL FOR SECTION 230 TO APPLY

Contrary to the misinformation being spewed all over (including by government representatives – which I find disappointing), Section 230 has never had a "neutrality" caveat for protection. Moreover, in the context of the issue of political speech, Senator Ron Wyden, a co-author of the law, even stated recently on Twitter, "let me make this clear: there is nothing in the law about political neutrality."

You can't get much closer to understanding Congressional intent of the law than getting words directly from the co-author of the law.

Quite frankly, there is no such thing as a "neutral platform." That's like saying a cheeseburger is a cheeseburger is a cheeseburger. Respectfully, some cheeseburgers from certain restaurants are just way better than others. Moreover, if we limited platforms to removing only unlawful content, i.e., a common carrier approach where platforms would be forced to treat all legal content equally and refrain from discrimination, then, as someone that deals with content escalations for platforms, I can tell you that we would have a very UGLY Internet, because sometimes people just suck, or their idea of a good time and funny isn't exactly age appropriate for all viewers/users.

MYTH: CENSORSHIP OF SPEECH BY A PLATFORM VIOLATES THE FIRST AMENDMENT

The First Amendment absolutely protects the freedom of speech. In theory, you are free to put on a sandwich board that says (insert whatever you take issue with) and walk up and down the street if you want. In fact, we're seeing such constitutionally protected demonstrations currently with the protesters all over the country in connection with the death of George Floyd. Peaceful demonstration (and yes, I agree, not all of that was "peaceful") is absolutely protected under the First Amendment.

What the First Amendment does not do (and this seems to get lost on people for some reason) is give one the right to amplification of that speech on a private platform. One might wish that were the case, but wishful thinking does not equal law. Unless and until there is some law, one that passes judicial scrutiny, which deems these private platforms a public square subject to the same restrictions that are imposed on the government, they absolutely do not have to let you say everything and anything you want. Chances are, this is also explained in their Terms of Service, which you probably didn't read, but should.

If you’re going to listen to anyone provide an opinion on Section 230, perhaps one would want to listen to a co-author of the law itself:

Think of it this way: if you are a bar owner and you have a drunk and disorderly guy in your bar who is clearly annoying your other customers, would you want the ability to 86 the person, or do you want the government to tell you that as long as you are open to the public you have to let that person stay in your bar, even if you risk losing other customers because someone is being obnoxious? Of course you want to be able to bounce that person out! It's not really any different for platform operators.

So for all of you chanting about how a platform's censorship of your speech on their platform is impacting your freedom of speech – you don't understand the plain language of the First Amendment. The law is "Congress shall make no law … abridging the freedom of speech…", not "any person or entity shall make no rule abridging the freedom of speech…", which is what people seem to think the First Amendment says or otherwise want the law to say.

LET’S KEEP THE CONVERSATION GOING BUT NOT MAKE RASH DECISIONS

Do platforms have the best of both worlds … perhaps. But what is worse? The way it is now with Section 230 or what it would be like without Section 230? Frankly, I choose a world with Section 230. Without Section 230, the Internet as we know it will change.

While we've never seen what the Internet looks like without Section 230, I can imagine we would go to one of two options: 1) an Internet where platforms are afraid to moderate content and therefore everything and anything would go up, leaving us with a very ugly Internet (because people are unfathomably rude and disgusting – I mean, content moderators have suffered from PTSD from having to look at what nasty humans try to share); or 2) an Internet where platforms are afraid of liability, and either UGC sites will cease to exist altogether or they will go to a notice-and-takedown model where, as soon as someone sees something they are offended by or otherwise don't like, they will tell the platform the information is false, defamatory, harassing, etc., and that content would likely automatically come down. The Internet, and public discussion, will be at the whim of a heckler's veto. You think speech is curtailed now? Just wait until the society of "everyone is offended" gets a hold of it.

As I mentioned to begin with, I don’t think that the Internet is perfect, but neither are humans and neither is life. While I believe there may be some concessions to be had, after in-depth studies and research (after all, we’ve only got some 24 years of data to work with and those first years really don’t count in my book) I think it foolish to be making rash decisions based upon political agendas. If the politicians want their own platform where they aren’t going to be “censored” and the people have ease of access to such information … create one! If people don’t like that platforms like Twitter, Facebook, or Google are censoring content … don’t use them or use them less. Spend your time and money with a platform that more aligns with your desires and beliefs. There isn’t one out there? Well, nothing is stopping you from creating your own version (albeit, I understand that it’s easier said than done … but there are platforms out there trying to make that move). That’s what is great about this country … we have the ability to innovate … we have options … well, at least for now.

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Breaking down the DOJ Section 230 Workshop: Stuck in the Middle With You

The current debate over Section 230 of the Communications Decency Act (47 U.S.C. § 230) (often referred to as "Section 230" or the "CDA") has many feeling a bit like the lyrics from the Stealers Wheel song "Stuck in The Middle With You," especially the lines "clowns to the left of me, jokers to my right, here I am stuck in the middle with you." As polarizing as the two extremes of the political spectrum seem to be these days, so are the arguments about Section 230. Arguably the troubling debate is compounded by politicians who either don't understand the law, or purposefully make misstatements about the law in an attempt to further their own political agenda.

For those who may not be familiar with the Communications Decency Act: in brief, it is a federal law enacted in 1996 that, with a few exceptions carved out within the statute, protects the owners of websites/search engines/applications (each often synonymously referred to as "platforms") from liability for third-party content. Platforms that allow third-party content are often referred to as user generated content ("UGC") sites. Facebook, Twitter, Snapchat, Reddit, TripAdvisor, and Yelp are all examples of such platforms, and reasonable minds would likely agree that there is social utility behind each of these sites. That said, these household-name platform "giants" aren't the only platforms on the internet that have social utility and benefit from the CDA. Indeed, it covers all of the smaller platforms too, including bloggers or journalists who want to allow people to comment on articles/content on their websites.

So, what's the debate over? Essentially the difficult realities about humans and technology. I doubt there would be argument over the statement that the Internet has come a long way since the early days of CompuServe, Prodigy and AOL. I also believe that there would be little argument that humans are flawed. Greed was prevalent and atrocities were happening long before the advent of the Internet. Similarly, technology isn't perfect either. If technology were perfect from the start, we wouldn't ever need updates ... version 1.0 would be perfect, all the time, every time. That isn't the world that we live in though ... and that's the root of the rub, so to speak.

Since the enactment of the CDA, an abundance of lawsuits have been initiated against platforms, the results of which have further defined the breadth of the law. For those really wanting to learn more and obtain a more historical perspective on how the CDA came to be, one could read Jeff Kosseff's book, The Twenty-Six Words That Created the Internet. To help better understand some of the current debate over this law, which will be discussed shortly, this may be a good opportunity to point out a few of the (generally speaking) practical implications of Section 230:

  1. Unless a platform wholly creates or materially contributes to content on its platform, it will not be held liable for content created by a third party.  This immunity from liability has also been extended to other tort theories of liability where it is ultimately found that such a theory stems from the third-party content.
  2. The act of filtering content by a platform does not suddenly transform it into the "publisher," i.e., the person that created the content in the first place, for purposes of imposing liability.
  3. A platform will not be liable for its decision to keep content up, or take content down, regardless of whether such information may be perceived as harmful (such as content alleged to be defamatory).
  4. Injunctive relief (such as a takedown order from a court) is legally ineffective against a platform if such an order relates to content for which the platform would have immunity.

These four general principles are the result of litigation that ensued against platforms over the past 23+ years. However, a few fairly recent high-profile cases stemming from atrocities, and our current administration (from the President down), have put Section 230 in the crosshairs, with calls for another amendment. The question is, amendment for what? One side says platforms censor too much; the other side says platforms censor too little. Platforms and technology companies are being pressured to implement stronger data privacy and security for their users worldwide, while the U.S. government complains that the measures being taken are too strong and therefore allegedly hinder its investigations. Meanwhile the majority of the platforms are singing "stuck in the middle with you," trying to do the best they can for their users with the resources they have, which unless you're "big Internet or big tech" are typically pretty limited. And frankly, the Mark Zuckerbergs of the world don't speak for all platforms, because not all platforms are like Facebook, nor do they have the kind of resources that Facebook has. When it comes to implementation of new rules and regulations, resources matter.

On January 19, 2020, the United States Department of Justice announced that it would be hosting a "Workshop on Section 230 of the Communications Decency Act" on February 19, 2020 in Washington, DC. The title of the workshop: "Section 230 – Nurturing Innovation or Fostering Unaccountability?" The stated purpose of the event was to "[D]iscuss Section 230 ... its expansive interpretation by the courts, its impact on the American people and business community, and whether improvements to the law should be made." The title of the workshop was intriguing because it seemed to suggest that the answer was one or the other, when the two concepts are not mutually exclusive.

On February 11, 2020 the formal agenda for the workshop (the link to which has since been removed from the government's website) was released. The agenda outlined three separate discussion panels:

  • Panel 1: Litigating Section 230, which was to discuss the history, evolution and current application of Section 230 in private litigation;
  • Panel 2: Addressing Illicit Activity Online, which was to discuss whether Section 230 encourages or discourages platforms to address online harms, such as child exploitation, revenge porn, and terrorism, and its impact on law enforcement; and
  • Panel 3: Imagining the Alternative, which was to discuss the implications of Section 230, and of proposed changes to it, on competition, investment, and speech.

The panelists were made up of legal scholars, trade associations and a few outside counsel who represent plaintiffs or defendants. More specifically, the panels were filled with many of the often-empaneled Section 230 folks, including legal scholars like Eric Goldman, Jeff Kosseff, Kate Klonick, and Mary Anne Franks, and staunch anti-Section 230 attorney Carrie Goldberg, a victim's rights attorney who specializes in sexual privacy violations. Added to the mix was Patrick Carome, who is famous for his Section 230 litigation work, defending many major platforms and organizations like Twitter, Facebook, Google, Craigslist, Airbnb, Yahoo! and the Internet Association. Other speakers included Annie McAdams, Benjamin Zipursky, Doug Peterson, Matt Schruers, Yiota Souras, David Chavern, Neil Chilson, Pam Dixon, and Julie Samuels.

A review of the individual panelists' bios would likely signal that the government didn't want to include the actual stakeholders, i.e., representation from any platform's in-house counsel or in-house policy team. While not discounting the value of the speakers scheduled to be on the panels, one may find it odd that those who deal with these matters every day, who represent the entities that would be most impacted by modifications to Section 230, and who would be in the best position to determine what is or is not feasible to implement if changes to Section 230 were to happen, had no seat at the discussion table. This observation was widespread ... much discussion took place on social media about the lack of representation of the true "stakeholders," with many opining that it wasn't likely to be a fair and balanced debate and that this was nothing more than an attempt by U.S. Attorney General William Barr to gather support for the bill aimed at punishing platforms/tech companies for implementing end-to-end encryption. One could opine that the bill really has less to do with Section 230 and more to do with the government wanting access to data that platforms may have on a few perpetrators who happen to be using a platform/tech service.

If you aren't clear on what is being referenced above, it bears mentioning that there is a bill titled the "Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2019," aka the "EARN IT Act of 2019," that was proposed by Senator Lindsey Graham. This bill came approximately two weeks after Apple was ordered by AG Barr to unlock and decrypt the Pensacola shooter's iPhone. When Apple responded that it couldn't comply with the request, the government was not happy. An article written by the Cato Institute stated that "During a Senate Judiciary hearing on encryption in December Graham issued a warning to Facebook and Apple: 'this time next year, if we haven't found a way that you can live with, we will impose our will on you.'" Given this information, and the agenda topics, the timing of the Section 230 workshop seemed a bit more than coincidence. In fact, according to an article in Minnesota Lawyer, Professor Eric Goldman pointed out that the "DOJ is in a weird position to be convening a roundtable on a topic that isn't in their wheelhouse."

As odd as the whole thing may have seemed, I had the privilege of attending the Section 230 "Workshop". I say "workshop" because it was a straight lecture without the opportunity for any meaningful Q&A dialog with the audience. Speaking of the audience, of the people I had direct contact with, the audience consisted of reporters, internet/tech/first amendment attorneys, in-house counsel/representatives from platforms, industry association representatives, individual business representatives, and law students. The conversations that I personally had, and personally overheard, were suggestive that the UGC platform industry (the real stakeholders) was concerned or otherwise curious about what the government was trying to do to the law that shields platforms from liability for UGC.

PANEL OVERVIEW:

After sitting through nearly four hours' worth of lecture, and even though I felt the discussion was a bit more well-rounded than I anticipated, I still feel that the entire workshop could be summarized as follows: "humans are bad and do bad things; technology is a tool with which bad humans do bad things; technology/platforms need to find a way to solve the bad human problem or face liability for what bad humans occasionally do with the tools they create; we want to make changes to the law even though we have no empirical evidence to support the position that this is an epidemic rather than a minority ... because bad people."

Perhaps that is a bit of an oversimplification but honestly, if you watch the whole lecture, that's what it boils down to.

The harms discussed during the different panels included:

  • Libel (brief mention)
  • Sex trafficking (Backpage.com, FOSTA, etc.)
  • Sexual exploitation of children (CSAM)
  • Revenge porn aka Non-Consensual Pornography aka Technology Facilitated Harassment
  • Sale of drugs online (brief mention)
  • Sale of alleged harmful products (brief mention)
  • Product liability theory as applied to platforms (à la Herrick v. Grindr)

PANEL #1:

In traditional fashion, the pro-Section 230 advocates explained the history of the CDA, how it is important to all platforms that allow UGC, not just "big tech," and emphasized the social utility of the Internet ... platforms large and small. However, the anti-Section 230 panelists pointed mainly to harms caused when platforms (though they did not elaborate on which ones) fail to remove sexually related content (though defamation got a short mention in the beginning).

Ms. McAdams seemed to focus on sex trafficking, touching on how, once Backpage.com was shut down, a similar site started up in Amsterdam. She referred to the issues she was speaking about as a "public health crisis." Of course, Ms. Goldberg raised argument relating to the prominent Herrick v. Grindr case, wherein she argued a product liability theory as a workaround to Section 230. That case ended when writ was denied by the U.S. Supreme Court in October of 2019. I've heard Ms. Goldberg speak on this case a few times, and one thing she continually harps on is the fact that Grindr didn't have a way to keep Mr. Herrick's ex from using their website. She seems surprised by this. As someone who represents platforms, it makes perfect sense to me. We must not forget that people can create multiple user profiles, from multiple devices, from multiple IP addresses, around the world. Sorry, plaintiffs' attorneys ... the platforms' crystal ball is in the shop on these issues ... at least for now. Don't misunderstand me. I believe Ms. Goldberg is fighting the good fight, and her struggle on behalf of her clients is real! I admire her work, and no doubt she sees it through a lens from the trenches she is in. That said, we can't lose sight of the reality of how things actually work versus how we'd like them to work.

PANEL #2:

There was a clear plea from Ms. Franks and Ms. Souras for something to be done about sexual images, including those exploiting children. I am 100% in agreement that while 46 states have enacted anti-"revenge porn" (or, better termed, non-consensual pornography) laws, such laws aren't strong enough because of the malicious intent requirement. All a perpetrator has to say is "I didn't mean to harm the victim, I did it for entertainment," or another seemingly benign purpose, and poof – case closed. That struggle is difficult!

No reasonable person thinks these kinds of things are okay, yet there seemed to be an argument that platforms don't do enough to police and report such content. The question becomes: why is that? Lack of funding and resources would be my guess ... either on the side of the platform OR, quite frankly, on the side of an under-funded/under-resourced government or agency that has to appropriately handle what is reported. What would be the sense of reporting unless you knew, for one, that the content was actionable, and that the agency it is being reported to would actually do anything about it?

Interestingly, Ms. Souras made the comment that after FOSTA no other sites (like Backpage.com) rose up. Curiously, that directly contradicted Ms. McAdams's statement about the Amsterdam website popping up after Backpage.com was shut down. So which is it? Pro-FOSTA statements also directly contradict what I heard last October at a workshop put on by ASU's Project Humanities entitled "Ethics and Intersectionality of the Sext Trade," which covered the complexities of sex trafficking and sex work. Problems with FOSTA were raised during that workshop. Quite frankly, I see all the flowery statements about FOSTA as nothing more than trying to put lipstick on a pig; trying to make a well-intentioned, emotionally driven law look like it is working when it isn't.

Outside of the comments by Ms. Franks and Ms. Souras, AG Doug Peterson of Nebraska did admit that the industry may self-regulate, and that sometimes that happens quickly, but he still complained that the state criminal law preemption makes his job more difficult, and he advocated for an amendment adding state and territory criminal law to the list of exemptions. While that may sound moderate, the two can be different, and arguably such an amendment would be overbroad when you are only talking about sexual images. Further, the inclusion of Mr. Peterson almost seemed like a plug for a subtle push about how the government allegedly can't do its job without modification to Section 230 – and I think part of what that was leaning toward, while not making a big mention of it, was the end-to-end encryption debate. In rebuttal to this notion, Matt Schruers suggested that Section 230 doesn't need to be amended but that the government needs more resources so it can do a better job with the existing laws, and he encouraged tech to work to do better as it can – suggesting efforts from both sides would be helpful.

One last important point made during this panel was Kate Klonick's distinction between the big companies and other sites that are hosting non-consensual pornography. It is important to keep in mind that different platforms have different economic incentives and that platforms are driven by economics. I agree with Ms. Klonick that we are in a massive "norm setting" period where we are trying to figure out what to do with things, and that we can't look to tech to fix bad humans (although it can help). Sometimes to have good things, we have to accept a little bad as the trade-off.

PANEL #3

This last panel was mostly a recap of the benefits of Section 230 and the struggles that we face when trying to regulate with a one-size-fits-all mentality, and I think most of the panelists seemed to agree that there needs to be some research done before we go making changes, because we don't want unintended consequences. That is something I've been saying for a while, and I reiterated it when the ABA Forum on Communications Law Digital Communications Committee hosted a free CLE titled "Summer School: Content Moderation 101," wherein Jeff Kosseff and I, in a panel moderated by Elisa D'Amico, Partner at K&L Gates, discussed Section 230 and a platform's struggle with content moderation. Out of this whole panel, the one speaker who had most people grumbling in the audience was David Chavern, who is the President of the News Media Alliance. When speaking about solutions, Mr. Chavern likened Internet platforms to traditional media, as though the two were directly comparable, and opined that platforms should be liable just like newspapers. Perhaps he doesn't understand the difference between first-party content and third-party content. The distinction between the two is huge, and therefore I found his commentary to be the least relevant and helpful to the discussion.

SUMMARY:

In summary, there seem to be a few emotion-evoking ills in society (non-consensual pornography, exploitation of children, sex trafficking, physical attacks on victims, fraud, and the drug/opioid crisis) that the government is trying to find methods to solve. That said, I don't think amending Section 230 is the way to address them unless and until there is reliable and unbiased data suggesting that the cure won't be worse than the disease. Are the ills being discussed really prevalent, or do we just think they are because they are being pushed out through information channels on a 24-hour news/information cycle?

Indeed, reasonable minds would agree that we, as a society, should try to stop harms where we can, but we also have to stop regulating based upon emotions. We saw that with FOSTA and, arguably, it has made things more difficult for law enforcement and victims alike and has had unintended consequences, including chilling speech, for others. You simply cannot regulate the hate out of the hearts and minds of humans, and you cannot expect technology to solve such a problem either. Nevertheless, that seems to be the position of many of the critics of Section 230.

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Don't Let Scammers During COVID-19 Fool You!

If COVID-19 wasn't stressful enough, now you have to watch out for scammers trying to take advantage of you. Below are a few tips:

  • Watch out for any links that get texted to your phone that promise to track coronavirus (through an app or otherwise). This might be malware designed to spy on you or get other information such as logins and passwords.
  • Watch out for links in random emails talking about the coronavirus. Phishing attempts are running rampant right now. If you aren't sure about a link in an email you get, don't click on it (a quick way to see where a link actually points is sketched after this list). If you aren't sure about an email that's in your inbox, simply call the company to ensure it's a legitimate email and safe to open. Better to make a phone call than be sorry.
  • Understand that there is a flood of disinformation/misinformation about the virus, including remedies, cures, etc. This is especially true among the naturopath/DIY groups. If it is not coming from a reputable source (local hospital, your doctor's office, the CDC, WHO, etc.) please don't share it. If you do share information, cite the source that you obtained the information from so others can determine the reliability of the information. Remember, anyone can buy a domain and anyone can make a meme.
  • If you receive a call from someone claiming to be from a charity, asking for personal information or financial information, hang up. If you want to give to a charity, go directly to their website. Also, only go to known charities. Just because a website looks like a "charity" doesn't mean it is. Again, anyone can buy a domain and make a website.
  • If random strangers are showing up at your house, suggesting they are there to do coronavirus testing, etc., do not let them in your house! Ask for credentials/information and then call the organization that they say they are with to confirm they are who they say they are. Remember, anyone can lift a picture or information off of a website and make a fake badge, etc.
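
As one small, practical illustration of the link-checking tip above: the text you see in an email is not necessarily where the link actually goes. Below is a minimal Python sketch (standard library only; the domain names are made up for illustration) that pulls out the real hostname of a link before you decide whether to trust it:

```python
from urllib.parse import urlparse

def real_hostname(link: str) -> str:
    """Return the host a link actually points to, which may differ
    from the text displayed in an email or text message."""
    parsed = urlparse(link)
    return parsed.hostname or "(no host found)"

# Hypothetical example: the link text might say "cdc.gov" while the
# underlying URL points somewhere else entirely.
print(real_hostname("http://cdc-gov.covid-tracker-example.net/map"))
# -> cdc-gov.covid-tracker-example.net (not cdc.gov)
```

You can get the same information by hovering over a link on a computer or long-pressing it on a phone; the point is simply to look at the real destination, not the label.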

Some related reading:

https://arstechnica.com/information-technology/2020/03/the-internet-is-drowning-in-covid-19-related-malware-and-phishing-scams/

https://www.forbes.com/sites/thomasbrewster/2020/03/18/coronavirus-scam-alert-covid-19-map-malware-can-spy-on-you-through-your-android-microphone-and-camera/

https://www.usatoday.com/story/opinion/2020/03/17/fda-chief-stop-using-unapproved-products-claiming-prevent-coronavirus-column/5041971002/

https://www.military.com/daily-news/2020/03/16/army-white-house-issue-warnings-about-coronavirus-hoaxes-and-scams.html

Lexington PD advises of COVID-19 related phone scam

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

“Internet Law” explained

For some reason, every time one says "lawyer" people tend to think of criminal law, family law or personal injury law. Perhaps because those are very common. Most people even understand the concept of a corporate or business lawyer, someone who handles trusts and estates, or even one that handles intellectual property. However, when we say "Internet Law" many people get the most confused look on their face and say: "What the heck is that?" If that is you, you're in good company. And, to be fair, the Internet really hasn't been around all that long.

If you were to read the “IT law” page on Wikipedia you’d see a section related to “Internet Law” but even that page falls a little short on a solid explanation – mostly because the law that surrounds the Internet is incredibly vast and is always evolving.

When we refer to "Internet Law" we are really talking about how varying legal principles and surrounding legislation influence and govern the internet and its use. For example, "Internet Law" can incorporate many different areas of law, such as privacy law, contract law and intellectual property law ... all of which were developed before the internet was even a thing. You also have to think about how the Internet is global, and how laws and the application of those laws can vary by jurisdiction.

Internet Law can include the following:

  • Laws relating to website design
  • Laws relating to online speech and censorship of the same
  • Laws relating to how trademarks are used online
  • Laws relating to what rights a copyright holder may have when their images or other content is placed and used online
  • Laws relating to Internet Service Providers and what liabilities they may have based upon data they process or store or what their users do on their platforms
  • Laws relating to resolving conflicts over domain names
  • Laws relating to advertisements on websites, through apps, and through email
  • Laws relating to how goods and services are sold online

As you can see just from the few examples listed above, a lot goes into "Internet Law" and many Internet Law attorneys will pick only a few of these areas to focus on because it can be a challenge just to keep up. Indeed, unlike other areas of law, "Internet Law" is not static and is always evolving.

Do you think you have an Internet Law related question? If you are in the state of Arizona and are looking for that solid "friend in the lawyering business" consider Beebe Law, PLLC! We truly enjoy helping our business and individual clients and strive to meet and exceed their goals! Contact us today.

All information contained in this blog (www.beebelawpllc.blog.com) is meant to be for general informational purposes only and should not be misconstrued as legal advice or relied upon. All legal questions should be directed to a licensed attorney in your jurisdiction.

Data Privacy: Do most people even deserve it?

Repeat after me: Everything connected online is hackable. Nothing online is really ever totally private. Most everything about my online activity is likely being aggregated and sold. This is especially true if the website is free for me to use.

Okay, before we get going, realize that this article is not discussing things that we would like to think are relatively safe and secure ... like banking and health records. Even then, please repeat the statements above, because even in those situations they still hold true. What I'm going to talk about is the more run-of-the-mill websites and platforms that everyone uses.

The truth of the matter is, most people never read a website's terms of service or privacy policy and readily click the "I agree" or "I accept" button without knowing if they have just agreed to give away their first born or shave their cat. Or, to be more realistic, without knowing that a free-to-use website, which you don't have to spend a penny to use, is likely to track your behavior so it can serve you ads for products and services that you might be interested in, and/or sell aggregated data and/or your email address to marketers or other businesses that might be interested in you as a customer or want to learn more about consumer habits in general. Hello people ... NOTHING IS FREE! Indeed, most humans are lazy as sh*t when it comes to all of that reading and so forth because really, who in the hell wants to read all that? Hey, I'm guilty of it myself, although since I write terms of service and privacy policies as a way to make a living, sometimes I will read them for pure entertainment. Don't judge me ... I'm a nerd like that.

We are quick to use, click or sign up on a website without knowing what it is that we are actually agreeing to or signing up for ... because we want entertainment and/or convenience ... and we want it NOW. Talk about an instant gratification society, right? Think about the following situations as examples: Go to the grocery store and buy ingredients then take another 35-40 minutes to make dinner, or simply use an app to order pizza? Send someone a handwritten letter through the mail (snail mail) or shoot them an email? Sit down and write checks or schedule everything through bill-pay? Pick up a landline phone (they do still exist) and call someone, or send them a text from your mobile device? Go to the local box office and purchase tickets to your favorite concert, or buy them online? Stand in line at the theater for tickets, or pre-pay on an app ahead of time and walk right in using a scan code through that app? Remember and type in your password all the time, or ask your computer or use your thumb print to remember it all? Take pictures with a camera that has film, get it developed and send those images to family and friends, or take pictures with your phone and instantly upload them to a social media platform like Facebook to share with those same people, for free? By now you should be getting my point ... and that is that we want convenience, and technology has been great at providing that, but for that convenience we often forget the price that is associated with it, including a loss of data privacy and security.

Lo and behold, and not surprisingly (to me anyway), something like the Facebook – Cambridge Analytica situation happens and Every. Damn. Person. Loses. Their. Mind! Why? Well, because mainstream media makes it into a bigger story than it is ... and suddenly everyone is "conveniently" all concerned about their "data privacy." So let me get this straight: You sign up for a FREE TO USE platform; literally spend most of your free time on said platform pretty much posting everything about yourself, including who your relatives are, what you like and don't like, the last meal you ate, and your dirty laundry with a significant other; spend time trolling and getting into disputes on bullsh*t political posts (that are often public posts where anyone can see them); check in at every place you possibly go; upload pictures of yourself and your family ... all of this willingly (no one is holding a gun to your head), and you are surprised that they sell or otherwise use that data? How do you think they are able to offer you all these cool options and services exactly? How do you think they are able to keep their platform up and running and FREE for you to use? At what point does one have to accept responsibility for the repercussions of using a website, signing up, or clicking that "I agree" button? Damn near every website has a terms of service and privacy policy (if they don't, steer clear of them or send them my way for some help) and you SHOULD be reading them and understanding them ... or at least don't b*tch when you end up getting advertisements as per the terms of service and privacy policy (that you didn't bother to read) ... or any other possible use of your information that could be out there, including the possibility that it will be used for nefarious purposes.

I’m not saying that general websites/platforms that house such content shouldn’t have reasonable security measures in place and that terms of service and privacy policies shouldn’t be clear (though its getting harder and harder to write for the least common denominator).ย  But again, nothing is 100% secure – there will always be someone that will find away to hack a system if they really want to and it’s really your fault if you fail to read and understand a website or platforms terms of service and privacy policy before you use it or sign up for something.ย  Why should people scream and cry for the “head” of a platform or website when people freely give their data away? ย That’s like blaming the car dealership for theft when you take your fancy new car to a ghetto ass neighborhood, known for high crime and car theft, leave it parked on a dark street, unlocked and with the keys in it. ย โ€œBut they should have watned me it would get stolen!โ€ Wait! What?Okay, maybe that’s a little too far of an exaggeration but seriously, the internet is a blessing and a curse.ย  If you don’t know of the potential dangers, and you don’t take the time to learn them, perhaps you shouldn’t be on it?ย  Remember, entertainment and convenience is the reward for our sacrifice of data privacy and security.

You know who has a heightened level of privacy, doesn't have social media accounts hacked, doesn't have data mined from online habits, and doesn't get spammed to death? My dad. Why? He doesn't get on computers, let alone get online, and he doesn't even own a smart phone. True story. The dude still has checks, writes handwritten notes, and hunts for his meat and gardens for his vegetables. Can you say "off the grid"? Want heightened data privacy? Be like dad.

Repeat after me: Everything connected online is hackable. Nothing online is really ever totally private. Most everything about my online activity is likely being aggregated and sold. This is especially true if the website is free for me to use.

All information contained in this blog (www.beebelawpllc.blog.com) is meant to be for general informational purposes only and should not be misconstrued as legal advice or relied upon. All legal questions should be directed to a licensed attorney in your jurisdiction.

10 Online Safety Hacks You Can Implement Today

Every day you read about major companies, or even law firms, getting hacked. Talk about some frustrating stuff! It's even worse when it actually happens to you. Of course, with the increase of technological convenience comes greater cyber security risk. One of my personal favorite cyber security gurus and "Shark Tank" star Robert Herjavec recently provided insight for an article that outlined 10 safety hacks that are easy to implement if you aren't already doing them. What are those 10 safety hacks? Continue reading ...

Some of these seem pretty intuitive. Others perhaps not so much, but they are a good idea.

  1. Enable multi-factor authentication (MFA) for all of your accounts.
  2. Cover internal laptop cameras.
  3. Don’t do any shopping or banking on public Wi-Fi networks.
  4. Ensure that websites are SSL secure (https instead of http) before making financial transactions online (a quick programmatic check of this idea is sketched just after this list).
  5. Delete old, unused software applications and apps from your devices.
  6. Update your anti-virus software as soon as updates become available.
  7. Refresh your passwords every 30 days for all accounts and use unique passwords for each account.
  8. Update computer/mobile software regularly.
  9. Don’t click on unknown links or open unknown attachments.
  10. Change the manufacturer’s default passwords on all of your software.
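
For the more technically inclined, here is a minimal Python sketch (standard library only, with a placeholder URL) of what item 4 is getting at: confirming that a site uses https and presents a certificate that passes standard validation before you hand over payment details. Your browser's padlock icon does the same job; this just makes the idea explicit:

```python
import socket
import ssl
from urllib.parse import urlparse

def uses_valid_https(url: str) -> bool:
    """Return True only if the URL uses https and the server presents a
    certificate that passes standard validation (trust chain and hostname)."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    context = ssl.create_default_context()
    try:
        with socket.create_connection((parsed.hostname, parsed.port or 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=parsed.hostname):
                return True  # TLS handshake succeeded and the certificate checked out
    except (ssl.SSLError, OSError):
        return False

# Placeholder usage:
# print(uses_valid_https("https://www.example.com"))  # True for a properly secured site
# print(uses_valid_https("http://www.example.com"))   # False - not https at all
```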

One of my favorites is the "cover internal laptop cameras." I personally used to get made fun of because I would place a sticky note over the top of the camera on my computer. I suppose it didn't help that it was bright green (or hot pink) depending on what color sticky note I had handy, so it drew attention until I was given a better one (a plastic slider made specifically for this purpose) at a networking event from Cox Business. Now it doesn't seem so silly after all.

Another one that I know is important, but probably more difficult to do, is to "refresh your passwords every 30 days for all accounts and use unique passwords for each account." Holy moly! Think of how many accounts have passwords these days. Literally every different system/app/website that you use requires a password! One LinkedIn user listed as a "Cyber Security Specialist" for a software company offered the solution of a program like LastPass. Apparently, according to this particular individual anyway, LastPass saves all of your passwords in a securely encrypted container on their servers and has many other built-in safety features in the event of stolen or hacked data. This way all you have to know is one password and LastPass will do the rest. While surely there are other similar solutions out there, if you are interested, you can read more about LastPass on their How It Works page. Sounds pretty cool, right!?! It might help you break out of that password hell.
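
Whether or not you use a password manager, the "unique password for every account" part is easy to automate. Here is a minimal Python sketch (standard library only; the account names are just placeholders) that generates strong random passwords from a cryptographically secure source:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation using
    the secrets module, which is designed for security-sensitive randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Placeholder account names - a different password for every account:
for account in ("email", "banking", "social media"):
    print(account, "->", generate_password())
```

A password manager does the same thing and, more importantly, remembers the results for you; the sketch just shows there is nothing magical about generating unique passwords.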

A little common sense plus adding in these 10 security hacks can go a long way! Do you have any security hacks to share? Have a favorite password protector that you use? Let us know in the comments!

If you are in the state of Arizona and are looking for that solid "friend in the lawyering business" consider Beebe Law, PLLC! We truly enjoy helping our business clients meet and exceed their goals! Contact us today.

All information contained in this blog (www.beebelawpllc.blog.com) is meant to be for general informational purposes only and should not be misconstrued as legal advice or relied upon. All legal questions should be directed to a licensed attorney in your jurisdiction.

Five Benefits to Keeping Your Business Lawyer in the Loop

Let's face it, the word "lawyer" for many is akin to a four-letter expletive that people are offended by. Typically because it reminds people of getting sued and/or having to shell out, often unexpectedly, loads of cash that they would rather have spent elsewhere ... like on a vacation. Similarly, as in all professions, not all lawyers are created equal, and not all lawyers really have their clients' financial interests at heart – after all, being a lawyer and having a law firm is a business. I personally pride myself on NOT taking advantage of my clients ... giving them direction on how they can do things themselves and helping only where they REALLY need/want it ... but after 18+ years in the legal field, I know that not all lawyers share my same client-friendly mindset. It is no wonder that people cringe at the thought of having to use a lawyer.

Lawyers don't have to be a thorn in your side though. In fact, a good lawyer can be a business's greatest adviser and advocate – keeping in mind that the job of a lawyer is to tell you what you NEED to hear, which can sometimes be very different from what you WANT to hear. All businesses should have a lawyer or two that they keep in regular contact with, and it should be part of your regular business operating budget.

Before you go thinking I'm crazy, here are a few reasons that keeping your lawyer updated on the goings-on of your business is advantageous:

  1. Lead Generation: Your lawyer can often be your biggest cheerleader (and lead generator) for future customers. Chances are your lawyer is tapped into many different networks. You never know when someone they know will need your business's products or services, and a solid referral from your lawyer could be future revenue in your pocket.
  2. Idea Generator: An attorney who understands you, your business, and your goals can be an invaluable asset when it comes to creative thinking. Brainstorming on new ideas with your lawyer may prove helpful in that they may be able to think of concepts outside the box for your business that you may not have already thought about. What if that lawyer helps you generate the next million-dollar idea?
  3. Cost Cutting: One thing that many lawyers are good at is organizing and streamlining processes – it's part of the way we think. What if your lawyer was able to give you ideas on how to streamline an existing process that will considerably help cut costs moving forward? If a few hundred dollars for your lawyer's time on the telephone could save you thousands of dollars in the next year, wouldn't you do it? Sure you would. You'd be a fool not to.
  4. Risk Mitigation: When you brainstorm with your lawyer on a new business concept, they can often help you plan your road map to reach your goals and help you navigate around pitfalls that you might not even think about. For example, when clients come to me talking about setting up a new business, I always ask them the business name and whether they have considered any reputation issues with that new business name. The same goes for contracting issues, employee issues, etc. To that end as well, there is a LOT of bad information being circulated around on the internet. Indeed it is wise to conduct your own research, but don't you think it prudent to have your research double-checked by someone who knows where to actually find the correct information when it comes to the law? As Dr. Emily So once said, "better information means better ideas, means better protection."
  5. Cost Effective: It is a lot cheaper to keep your lawyer up to speed on your business as it grows, even if through a short monthly 15-minute call, than it is to try to ramp your attorney up (trying to teach them everything about your business, including policy changes and the like, in a short amount of time) when you suddenly need advice in order to be reactive to a situation – like when you are named as a defendant in a lawsuit. When you are named as a defendant in a lawsuit, you typically only have 20 days (this varies by court and jurisdiction) from the date that you are served with a complaint to determine what your defenses are and what sort of a response you will need to file. That process becomes a whole lot easier if your attorney already knows about you, your business, its policies and procedures, etc. It is also easier to budget a few hundred dollars a month to keep your attorney up to date than to get smacked with a request for a $20,000.00 retainer, most of that potentially being eaten up just "learning" about your business, and then having subsequent large litigation bills.

As you can see, there are many reasons to regularly communicate with your attorney, and hopefully you will find doing so even more advantageous and beneficial than paying your monthly insurance bill. As Benjamin Franklin once said, "an ounce of prevention is worth a pound of cure!"

If you are in the state of Arizona and are looking for that solid "friend in the lawyering business" consider Beebe Law, PLLC! We truly enjoy helping our business clients meet and exceed their goals! Contact us today.

All information contained in this blog (www.beebelawpllc.blog.com) is meant to be for general informational purposes only and should not be misconstrued as legal advice or relied upon. All legal questions should be directed to a licensed attorney in your jurisdiction.

ADA Compliance and Websites: Yes, it’s really a thing.

I've said it before ... it seems like everyone today has a website. Whether you are a stay-at-home mom blogger, operate an e-commerce boutique shop, run a local mechanic shop with a basic website, or are a full-blown tech company – chances are you are no stranger to the internet and websites. Websites are how people find and interact with you or your company. Depending on what your website is designed for, you may have more risks to consider. For example, as I recently discussed, if your website hosts third-party content, there are risks associated with that kind of a website. Similarly, if your website collects email addresses so that you can later market to them, that presents an email marketing risk. This article is going to briefly discuss a new potential risk for website operators – compliance with the Americans with Disabilities Act of 1990 (ADA).

You might be thinking: "How could a website become an issue with the ADA?" That was my initial reaction too, until I considered people who are blind or have a hearing impairment. It's easy to take for granted senses that we are used to having. Think of all the "closed captioned (cc) for the hearing impaired" text that we have heard/seen on the television in the past. Well, how does that work for those videos that you are making and posting to your website? How do people navigate your website if they can't see? Until a recent conference I had never even thought about how a visually impaired person accesses the internet. I have since discovered that the visually impaired often access the internet through a special screen reader. JAWS seems to be the most popular, and I found a few interesting YouTube videos that give a demonstration of the JAWS program from different perspectives. If you are curious, like I was, and want a unique perspective that may help you with your website accessibility, you can see two of the links I found HERE and HERE. The second video is from a student's perspective, which has a lot of good insight – including difficulties with .pdf documents, etc.

The above examples, coupled with the legal actions that have been taken against websites in relation to ADA claims, and the fact that I am starting to see solicitations from Continuing Legal Education companies teaching attorneys how to initiate such actions, send a solid message that this is something people/businesses need to be thinking about as they move forward with their existing websites and/or build out new websites.

THINGS TO KNOW AND UNDERSTAND:

  • The Americans with Disabilities Act of 1990 (ADA) prohibits discrimination and ensures equal opportunity for persons with disabilities in employment, State and local government services, places of public accommodation, commercial facilities, and transportation.
  • These laws can be enforced by the Department of Justice (DOJ) and through private lawsuits, and indeed there are cases where the DOJ has specifically stated in rulings that websites should be designed so that they are accessible to those who have physical disabilities, including vision and hearing impairments.
  • The DOJ has already required some websites to modify their sites to comply with the ADA guidelines – see the Web Content Accessibility Guidelines (WCAG) 2.0.
  • There are no set required standards YET, but they are expected soon, and they may require compliance within 12 months from the date of publication of the new standards to the public register. If you have a big website, and perhaps a lot of changes that will need to be made, that isn't a lot of time.

WHAT IS BEING LOOKED AT FOR COMPLIANCE?

WebAIM.org appears to be a pretty decent resource for information. They have a pretty comprehensive checklist that may assist you and your website development team; however, below are a few points for consideration:

Information and user interface components must be presentable to users in ways they can perceive.

  • Guideline 1.1: Provide text alternatives for any non-text content so that it can be changed into other forms people need online – think of large print, speech, symbols or simpler language (a small automated check for this is sketched after these guideline lists).
  • Guideline 1.2: Provide captions and alternatives for multimedia.
  • Guideline 1.3: Create content that can be presented in different ways (for example a more simplistic layout) without losing information or structure.
  • Guideline 1.4: Make it easier for users to see and hear content including separating foreground from background.

User interface components and navigation must be operable.

  • Guideline 2.1: Make all functionality available from a keyboard.
  • Guideline 2.2: Provide users enough time to read and use content.
  • Guideline 2.3: Do not design content in a way that is known to cause seizures (like flashing content).
  • Guideline 2.4: Provide ways to help users navigate, find content, and determine where they are.

Information and the operation of user interface must be understandable.

  • Guideline 3.1: Make text content readable and understandable.
  • Guideline 3.2: Make web pages appear and operate in predictable ways.
  • Guideline 3.3: Help users avoid and correct mistakes.

Content must be robust enough that it can be interpreted reliably by a wide variety of user agents, including assistive technologies.

  • Guideline 4.1: Maximize compatibility with current and future user agents, including assistive technologies.
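
Tying this back to Guideline 1.1: one of the simplest things to audit on your own site is whether every image has a text alternative. Below is a minimal Python sketch (standard library only, with a placeholder URL) that lists the images on a single page that are missing an alt attribute. It is nowhere near a full WCAG audit – it just illustrates how concrete and checkable some of these guidelines are:

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class AltTextChecker(HTMLParser):
    """Collect <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if not attributes.get("alt"):
                self.missing_alt.append(attributes.get("src", "(no src)"))

def images_missing_alt(url: str) -> list:
    """Fetch a page and return the src of every image lacking alt text."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt

# Placeholder URL - point this at your own site:
# print(images_missing_alt("https://www.example.com"))
```

Real accessibility reviews involve manual testing with screen readers like JAWS and broader tooling, but even a rough pass like this can surface easy fixes.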

WHAT IF MY WEBSITE ISN'T COMPLIANT?

For websites that aren't compliant, the following are some things you should consider:

  • Have a 24/7 telephone number serviced by a live customer service agent who can provide access to the information on the website – the phone number must be identified on the website and be accessible using a screen reader.
  • Consider starting to make adjustments to your website to help ensure you are compliant.

NEED HELP ENSURING COMPLIANCE?

It is always a good idea to get a formal legal opinion on these kinds of matters if in doubt. Being proactive puts you in a far better position than being reactive, when you are in a time crunch and money might be tight. If you are in the state of Arizona and need help with suggestions on how to make your website ADA compliant, or would like to discuss this topic generally so that you have a better understanding of how this issue might impact your business, Beebe Law, PLLC can help! Contact us today.

All information contained in this blog (www.beebelawpllc.blog.com) is meant to be for general informational purposes only and should not be misconstrued as legal advice or relied upon. All legal questions should be directed to a licensed attorney in your jurisdiction.

Fighting Fair on the Internet – Part 9 | Troubles with Defamatory Online Reviews and Content Scrapers

Content scrapers are problematic for authors, defamation plaintiffs and website operators alike.

There is no doubt that there is typically a clash of interests between authors, defamation plaintiffs and the operators of websites who host public third-party content. Authors either want the information to stay or be removed; defamation plaintiffs want information removed from the website(s); and website operators, such as many of the online review websites, fight for freedom of speech and transparency – often arguing, among many other things, that the information is in a public court record anyway, so removal is moot. These kinds of arguments, often surrounding the application of the federal law known as the Communications Decency Act, or Section 230 (which arguably provides that websites don't have to remove content even if it is false and defamatory), are playing out in courts right now. One example is the case of Hassell v. Bird, which is up on appeal before the California Supreme Court and relates to a posting on Yelp. However, in spite of these clashes of interests, there does seem to be a trend emerging where the author, the plaintiffs, and the websites are actually standing in the same boat, facing the same troublemaker.

Providing some background and context…

COPYRIGHT AND POSTING AN ONLINE REVIEW: Many people are familiar with the term "copyright" and have a basic understanding that a copyright is a legal right, created by law, that gives the creator of an original work limited exclusive rights to its use and distribution. Wikipedia has some decent general information if you are interested in learning more. For example, a guy who I will call John for the purpose of this story can get on a computer and draft up a complaint about Jane and her company XYZ before he posts it online on a review website. As it sits on John's computer as written, John owns the copyright to that information. When John decides to post it online to a review website, depending on the website's terms of service, John may have assigned his copyright rights to the website on which he was posting. So either John or the website may own the copyright to that content. That point is important for a few reasons, and there are arguments for and against such an assignment, but those issues are for another article some other time.

DEFAMATORY POSTING IS PUBLISHED ONLINE: Continuing with the story, let's say that John makes a bad call in judgment (because he hasn't sat through one of my seminars relating to internet use and repercussions from the same, or hasn't read my article on not being THAT guy, and doesn't realize how bad doing this really is) and decides to post his false and defamatory posting about Jane and XYZ to an online review website. It's totally NOT COOL that he did that, but let's say that he did. Now that posting is online, being indexed by search engines like Google, and anyone searching for Jane or XYZ might be seeing John's posting.

WHAT TO DO WITH THE DEFAMATORY POSTINGS: The internet tends to work at lightning speed, and John's post is sure to be caught on to by Jane or by someone who knows Jane or her company XYZ. As an aside, I always recommend that people and businesses periodically, like once a month, run searches about themselves or their businesses just to see what pops up. It's just a good habit to get into, because if there is a problem you will want to address it right away – especially if you think it is false and defamatory and want to take legal action, because there are pretty strict statutes of limitations on those claims – in many states only providing one year from the date of publication. When Jane learns of the posting, maybe she knows who John is by what was said in the posting – and maybe she isn't sure who posted it – but either way, chances are she is going to seek legal help to learn more about her options. Many people in Jane's position will want to threaten to sue the website ... but it's actually not that simple. Why? Because unless the website actually contributed to writing the stuff, which it most likely didn't, it can't be held liable for that content. That's the law here in the United States – the Communications Decency Act. Fortunately, while online defamation is a niche area of law, there are many attorneys around the country who are well versed in online defamation and able to assist people who find themselves in this kind of a situation.

So by now you are probably wondering how in the world a defamed party and a website could both be standing in the same boat. I promise I am getting there, but I felt the need to walk through this story for the benefit of those who don't work in this field and have little to no clue what I am even talking about. Baby steps ... I'm getting there.

A FIGHT FOR REMOVAL: As I pointed out in the beginning, arguably under the law, websites don't have to remove the content even if it is found by a court or otherwise to be false and defamatory, and that leaves plaintiffs in an awkward position. They want the information taken down from the internet because it's alleged to be harmful. What can be done all depends on the website the content is on.

REPUTATION MANAGEMENT: Many people think that reputation management is the way to go. However, while reputation management can be helpful in some instances, and I'm not trying to knock those legitimate companies out there that can definitely help a company increase its advertising and image online, many find it to be only a temporary band-aid when trying to cover up negativity. Similarly, in some cases, some reputation management companies may employ questionable tactics such as bogus DMCA takedown notices or fake court orders. Yes, those situations are real – I actually just presented on that topic to a group of internet lawyers less than two months ago, and I caution anyone who is using or considering a reputation management company that guarantees removal of content from the internet.

A WEBSITE'S INTERNAL POLICING: The same law that protects websites from liability for third-party content also encourages self-policing by giving websites editorial discretion over what to post and not post. As such, some websites have taken a proactive approach and created their own internal policing systems where, depending on the circumstances and what was written, the website might find that the posting violated its terms of service and, within its discretion, take some sort of action to help a victim out. Not every website has this, but it's certainly worth checking into.

COURT ORDERS: Remember, a website, arguably per the law, doesn't necessarily have to take a posting down regardless of what a court order says. Shocking, but this has been found to be true in many cases around the country. So what do websites do? Here are a few ways websites might treat a court order:

  • Some websites will, without question, accept a court order regardless of jurisdiction and remove content – even if the order was entered by default, which can mean the defendant didn't appear and defend the case. It's worthwhile to note that some people won't appear and defend because: 1) they never got notice of the lawsuit in the first place; 2) they didn't have the knowledge to fight the case themselves; or 3) they didn't have the resources to hire an attorney to fight the case – let's face it, good lawyers are expensive! Even cheap lawyers are still expensive.
  • Some websites will remove a posting only if there is some sort of evidence supporting the court order – for example, the defendant appeared and agreed to remove the posting, or there is a simple affidavit from the author acknowledging that the information is false and agreeing to remove it.
  • Some websites will redact only the specific content that the court, based on evidence, has found to be false and defamatory. This means that any opinions or other speech that would be protected under the law, such as true statements, would remain posted on the website.
  • And still other websites won't even bother with a court order because they are out of the country and/or just don't give a crap. These types of websites are rumored to try to get people to pay money in order for something to be taken down.

COURT ORDER WHACK-A-MOLE WITH SEARCH ENGINES LIKE GOOGLE: One of the biggest trends is to get a court order for removal and send it in to search engines like Google for de-indexing. De-indexing removes the specific URL in question from the search engine's index in that particular country. I make this jurisdictional point because countries in the European Union have a "Right to be Forgotten" law, and search engines like Google are required to remove content from searches originating in Europe – but that is not the law in the US. The laws are different in other countries, and arguably Google doesn't have to remove anything from its searches in the US. Going back to our story with John, Jane, and company XYZ: if Jane manages to litigate the matter and get a court order for removal of the posting's URL from the search engine's index, then, in theory, searches for Jane's name or her company would no longer turn up the posting.

Now this all sounds GREAT, and it seems to be one of the better solutions employed by many attorneys on behalf of their clients, BUT there are a few problems with this method, and it becomes a game of legal whack-a-mole:

  1. A website could change the URL, which would toss the posting back into the search engine's index and make it searchable again. The party would either have to get a new court order or, at the very least, resubmit the existing court order to the search engine identifying the new URL.
  2. If the court order is sent to Google, Google will typically post a notice in its search results that a result was removed pursuant to a court order and link to the Lumen Database, where anyone can see specifically which URLs were removed from the index along with any supporting documentation. That documentation typically includes the court order, which may or may not include information relating to the offending content. Anyone can then seek out the court case information and, in many cases, even pull the underlying Complaint from online, learn exactly what the subject report said, and learn whether the case was heard on the merits, entered by default, or resolved through some other court process. Arguably, the information really isn't gone for those who are willing to do their homework.
  3. The First Amendment and many state privilege laws allow the press, bloggers, and others to make a story out of a particular situation so long as they quote exactly from a court record. No doubt a court record relating to defamation will contain the exact defamatory statements that were posted on the internet. So, for example, any blogger or journalist in a jurisdiction that recognizes the privilege, without a carve-out for defamation, could write a story about the situation, quote the content verbatim from the court record as part of that story, and publish the story online, inclusive of the defamatory content, without liability.

The uphill battle made WORSE by content scrapers.

With all that I have said above, which is really just a 10,000-foot view of the underlying jungle, poor Jane in my example has one heck of an uphill battle regarding the defamatory content. Further, in my example, John only posted on one review website. Now enter the content scrapers, who REALLY muck up the system, causing headaches for authors, for defamation plaintiffs, and for website providers like review websites.

CONTENT SCRAPERS: When I say "content scrapers," for the purpose of this blog article, I am referring to all of these new "review websites" that are popping up all over and that, to get their start, appear to be systematically scraping (stealing) the content of other review websites that have been around for a long time and putting it on their own sites. Why would anyone do this, you ask? Well, I don't know exactly, but I can surmise that: 1) the content helps their search rankings, which helps generate traffic to their websites; 2) traffic to a website helps bring in advertising dollars from the ads running on the site; and 3) if they are out of the country (and many appear to be outside of the United States) they don't really give a crap and can solicit money from people who write in and ask for content to be taken down. I sometimes refer to these websites as copycat websites.

CONTENT SCRAPERS CAUSE HEADACHES FOR AUTHORS: Many people have a favorite review website that they turn to for information – be it Yelp for reviews of a new restaurant they want to try, TripAdvisor for people's experiences with a particular hotel or resort, or any other online review website…it's brand loyalty, if you will. An author has the right to choose which website they are willing to post their content on and, arguably, that decision could be based in part on the particular website's Terms of Service as they relate to that content. For example, some websites will allow you to edit and/or remove content that you post, while other websites will not allow you to remove or edit content once it is posted. I'd like to think that many people look to see how much flexibility is provided with respect to their content before they choose which forum to post it on.

When a copycat website scrapes/steals content from another review website, it takes away the author's right to choose where their content is placed. Along the same lines, the copycat website may not provide the author with the same level of control over their content. Going back to my John, Jane, and XYZ example: if John posted his complaint about Jane on a website that allowed him to remove it at his discretion, it's entirely possible that a pre-litigation settlement could be reached in which John voluntarily agrees to remove his posting, or John decides to do so of his own accord after he cools down and realizes he made a big mistake posting the false and defamatory statement about Jane online. However, once a copycat website steals that content and places it on its own site, John not only has to argue over whether he even posted the content on that other website, but he also may not be able to carry out a pre-litigation settlement or remove the posting at his own direction. In fact, there is a chance that the copycat website will demand money to take it down – and then, who knows how long it will even stay down. After all, the copycat website doesn't seem to care much about the law in the first place – stealing content is arguably copyright infringement.

CONTENT SCRAPERS CAUSE HEADACHES FOR DEFAMATION PLAINTIFFS: As discussed within this article, defamation plaintiffs have an uphill battle when it comes to pursuing defamation claims and trying to get content removed from the internet. It almost seems like a losing battle, but that appears to be the price paid for keeping freedom of speech alive and maintaining a level of transparency. Indeed, there is value in not stifling free speech. However, when people abuse their freedom of speech and cross the line online, as John did in my example, it makes life difficult for plaintiffs. It's bad enough when someone like John posts on one website, but when a copycat website then steals that content from the original review website and posts it to its own site(s), the plaintiff now has to fight the battle on multiple fronts. Just when a plaintiff makes headway with the original review website, the stolen content shows up on another website. And, depending on the copycat website's own Terms of Service, there is a chance that it won't come down at all and/or the copycat website will demand money to take down the content that it stole. Talk about frustrating!

CONTENT SCRAPERS CAUSE HEADACHES FOR REVIEW WEBSITES: When it comes to online review sites, it's tough to be the middle man…and by middle man I mean the operator of the review website. The raging a-holes of the world get pissed off when you don't allow something "over the top" to be posted on your website and threaten to sue – arguing you are infringing on their First Amendment rights. The alleged defamation victims of the world get pissed off when you do allow something to get posted and threaten to sue because, well, they claim they have been defamed and they want justice. The website operator gets stuck in the middle, having zero clue who anyone is, and is somehow supposed to play judge and jury to thousands of postings a month? Not that I'm trying to write myself out of a job, but some of this stuff gets REALLY ridiculous and some counsel are as loony as their clients. Sad but true. And, as if dealing with these kinds of issues wasn't enough, enter the exacerbators, i.e., the copycat websites.

To begin with, website operators that have been around for a long time have earned their rankings. They have had to spend time on marketing and interacting with users and customers in order to get where they are – especially those that have become popular online. Like any business, a successful one takes hard work. Copycat websites, which steal content, are just taking a shortcut to the top while stepping on everyone else. They get the search engine ranking, they get the advertising dollars, and they didn't have to do anything for it. To top it off, while the algorithms change often and I am no search engine optimization (SEO) expert, I suspect that many of the original websites may see a reduction in their own rankings because of the duplicative data online. Reduced rankings and traffic may lead to a reduction in revenue.

I like to think that many website operators try hard to find a happy medium between freedom of speech and curtailing over-the-top behavior. That's why websites have terms of service governing what kind of content is and isn't allowed, and users are expected to follow the rules. When a website operator learns of an "over the top" posting or other situation that would warrant removal or redaction, many operators are eager to help people. What is frustrating is when a website feels like it is helping a person, only to get word days later that the same content has popped up elsewhere online – meaning on a copycat website. In some instances people wrongly accuse the original website of being connected to the copycat website, and the original website is left to defend itself and try to convince the person that the accusations are inaccurate. There is a saying that "no good deed goes unpunished," and I think it holds true for website operators in that position.

As the new-age saying goes, "The Struggle is Real!"

I don't know what the solution is to all of these problems. If you have kept up with this Fighting Fair on the Internet blog series that I have been working on over the past year, you know that I REALLY disapprove of people abusing the internet. I support freedom of speech, but I also think that freedom of speech shouldn't give one a license to be an a-hole either. I don't know that there is a bright-line rule for what content should and should not be acceptable…but as Supreme Court Justice Potter Stewart said in Jacobellis v. Ohio back in 1964, describing his threshold for obscenity, "I know it when I see it." For me, after having seen so much through work and in my own personal life, I think that is true. My hope is that if I keep talking about these issues and hosting educational seminars and workshops in an effort to raise awareness, perhaps people will join my mission. I firmly believe that we can ALL do better with our online actions…all we need is a little education and guidance.

Until next time friends…

 

Fighting Fair on the Internet | Part 8 – Don’t Be Sheep – Think Before You Click or Opine

The Information Highway Turned into a Misinformation Highway.

When did everyone lose their minds and all critical thinking skills? Are we nothing more than mindless drones who forgot how to conduct any research? Did they stop teaching these skills in school? And who in the heck decided it was a good idea to create a bunch of fake garbage and post it to the internet just to see how gullible everyone is, or to use it as a mechanism for revenge?

Before you share – be proactive and conduct a little research. THINK before you CLICK and SHARE.

Some of these examples may be older, but they prove a point:

No, Mark Zuckerberg is not giving away his stock to those of you who share the message on your page. No one gets something for nothing…and what you are assuming he said isn't what he said. No…it hasn't been confirmed by some news station either. Did you actually see it on the news? No – so don't share it "just in case."

No, Facebook is not likely to start charging for its use. Are you serious? They probably make way too much money off of advertising and selling the data that you all give away for free when you do anything on the website…including playing all those mind games to find out what personality you have or what your first Facebook picture was.

No, Facebook isn't going to make everything you posted public…which is comical because if it's online, in a sense, it is already public…but that's a different story for a different day. I've seen so many copy-and-paste versions of a privacy scare (privacy hoax) that suggest the information was seen on the news and that if you copy and paste some crap referencing UCC 1-308 and the Rome Statute you are advising Facebook that you don't give them permission to use your data and that it is private and confidential. I'm sorry, but friggen really? You all have Google…how about learning what UCC 1-308 and the Rome Statute even refer to before making yourself look like a bonehead and sharing it with other people who will do the same boneheaded thing by accepting it as gospel and sharing it – you know, "just in case." Is the "just in case" one's way of saying "I'm way too lazy to research this, but since it uses legal words it must be legit, so I'm going to share it anyway"? FYI – the UCC stands for the Uniform Commercial Code and governs the sale of goods and other commercial transactions, like processing checks. The Rome Statute deals with the International Criminal Court.

No, Walmart is not likely to give you hundred-dollar gift cards for sharing stuff on your page. Nope, Target isn't likely going to do it either. What a brilliant subliminal advertising ploy people are playing into, though. It gets so many people to share the brand name all over the internet without the company having to do anything or spend any advertising dollars.

No, you're not likely going to be given a chance to get a new car for sharing some advertisement that was probably created by some basement boy with time on his hands who wanted to see how many people would share his inside joke on your page. Did you bother to check with the company to see if it was a legitimate offer they were running? Mmmm, my guess is probably not.

No, Red Bull isn't made out of "bull semen" or "bull pee." It's made up of all kinds of other things, including synthetic ingredients that arguably may not be the best for you, but come on… bull semen? Seriously? Who comes up with this stuff?

No, your favorite "news" station isn't telling you everything you ought to know. Indeed, your favorite news station has clipped, edited, and skewed what was REALLY said…so you had better go find the whole debate or story, educate yourself by taking the time to watch the whole thing (pray it was live; otherwise it has likely been edited to fit an agenda), and THEN form an opinion – to do anything less is to allow yourself to be swayed by only a tiny piece of information that may, or may not, have been taken out of context. Don't be sheep. It's amazing how many people take the mainstream media (MSM) for the truth, the whole truth, and nothing but the truth. Having been interviewed a few times by MSM outlets for different stories, I can tell you the final product is swayed, chopped, hacked bullshit that looks and feels like a whole story – but it's not. In fact, in my experience, it's actually quite different.

No, a headline doesn't always reflect the story. Ever heard of click-bait? That headline that gets your attention, because it sounds like a train wreck, is often misleading as to what is actually written in the article. If you are going to click on that advertising-dollar-generating article, at least don't be lazy. Read the entire article, and even then, take it with a grain of salt because it's probably not the whole story. Don't just read the headline and then share it with all your friends, making assumptions based upon the headline alone.

No, that review you read may not be legitimate. Even if it appears in multiple places all over the internet – it could all still be the same author, or content scrapers. I'll talk about that more in another blog eventually. Yes, many people write honest and legitimate reviews for legitimate reasons, BUT just like you see on social media, there are review trolls. Review trolls are the people who suck at life so badly that they resort to making up or drastically embellishing stories about their exes: an ex-business partner, ex-employer, ex-employee, ex-boyfriend/girlfriend, ex-husband/wife, and even former friends or family they aren't getting along with. Some people even resort to making up crap about themselves to gain the sympathy of others (playing the victim is so easy these days) or might resort to making up stories about their competitors – because, well, some people can't stand to see others do better than them, and misery loves company. Be sure to take everything with a grain of salt and remember to conduct some research – after all, what you read (be it checking up on a person or a business) could be entirely made up, and once it's up…it can't always come down. That goes for you too, human resources hiring managers…

No, that meme that someone put together on their phone, incorrect math, spelling, and all, isn't necessarily true. I can understand sharing the funny ones for humor or satire, but some people post that sh!t like it's the TRUTH! Holy cow – anyone can make that stuff up, and then y'all go sharing it like it was written in the Encyclopedia Britannica. Oh wait, some are too young to even remember actual fact books like that. And when did "meme" even become a word? Seems about the equivalent of the so-called words "bae" and "fleek" to me. I wonder if my parents thought the same thing about the use of the word "rad" back in the 80s – but then again, at least "rad" was just the shortened version of "radical." That at least made some sense.

As a society, I feel we need to stop being so damn lazy and stop accepting garbage, including MSM stories, posted on the web as truth without question. I've seen so many people accept anything written on the internet as gospel and then share and opine based on, well…nothing but bad information. WTF? You might as well take your brain out, play pat-a-cake with it, and stick it back in as mush. You were given a brain…so use it!

Until next time friends…