California Assembly Bill 1687, designed to protect against age discrimination, gets tagged by Ninth Circuit on First Amendment grounds: IMDb.com, Inc. v. Becerra

On June 19, 2020, the Ninth Circuit Court of Appeals ruled that the content-based restrictions on speech contained within California’s Assembly Bill 1687 were facially unconstitutional because the law “does not survive First Amendment scrutiny.”

If you live outside of glamorous places like California, New York, and Florida, you may not be paying attention to laws being pushed by organizations like the Screen Actors Guild, aka “SAG.” Nevertheless, I try to keep my ear to the ground for cases that involve the First Amendment and Section 230 of the Communications Decency Act. This case happens to raise both issues, although only the First Amendment matter is addressed here.

For those who may be unfamiliar, IMDb.com is an Internet movie database that provides a free public website with information about movies, television shows, and video games. It also contains information on actors and crew members in the industry, which may include the subject’s age or date of birth. This is an incredibly popular site; the court’s opinion noted that as of January 2017 “it ranked 54th most visited website in the world.” The information on the site is generated by users (just like you and me), but IMDb does employ a “Database Content Team tasked with reviewing the community’s additions and revisions for accuracy.”

Outside of the “free” user-generated section, IMDb also introduced, back in 2002, a subscription-based service called “IMDbPro” for industry professionals (actors, crew, and recruiters). It essentially acts as a LinkedIn for Hollywood, providing space for professionals to upload resume-type information, headshots, and the like, while casting agents can search the database for talent.

Back in 2016, SAG apparently pushed for regulation in California (enacted as Assembly Bill 1687) that arguably targeted IMDb, in an effort to curtail alleged age discrimination in the entertainment industry. No doubt a legitimate concern (as it is in many industries); however, good intentions often result in bad law.

AB 1687 was signed into law, codified at Cal. Civ. Code § 1798.83.5 and included the following provision:

A commercial online entertainment employment service provider that enters into a contractual agreement to provide employment services to an individual for a subscription payment shall not, upon request by the subscriber, do either of the following: (1) [p]ublish or make public the subscriber’s date of birth or age information in an online profile of the subscriber [or] (2) [s]hare the subscriber’s date of birth or age information with any Internet Web sites for the purpose of publication.

Cal. Civ. Code § 1798.83.5(b)(1)-(2)

The statute also provides, in pertinent part:

A commercial online entertainment employment service provider subject to subdivision (b) shall, within five days, remove from public view in an online profile of the subscriber the subscriber’s date of birth and age information on any companion Internet Web sites under its control upon specific request by the subscriber naming the Internet Web sites.

Cal. Civ. Code § 1798.83.5(c)

The practical effect of these provisions is that a subscriber of IMDbPro may request that IMDb remove the subscriber’s age or date of birth, and that IMDb, upon such request, must remove that information from the subscriber’s profile (which I would think it could do on its own to the extent it has control over such profile data) AND, more problematically, from anywhere else on its website where such information exists, regardless of who created that content. This extends to content the IMDbPro subscribers may not have control over, as it may have been generated by third-party users of the site.

The Court opinion explained that “[b]efore AB 1687 took effect, IMDb filed a complaint under 42 U.S.C. § 1983 in the Northern District of California to prevent its enforcement. IMDb alleged that AB 1687 violated both the First Amendment and Commerce Clause of the Constitution, as well as the Communications Decency Act, 47 U.S.C. § 230(f)(2).” While there was much back and forth between the parties, the crux of the dispute, and what was crucial for the appeal, was the language prohibiting IMDb from publishing age information without regard to the source of that information.

When considering the statutory language restricting what could be posted, the Court of Appeals concluded:

  • AB 1687 implemented a content-based restriction on speech (i.e., dissemination of date of birth or age) that is subject to First Amendment scrutiny.
  • AB 1687 did not present a situation where reduced protection would apply (e.g., where the speech at issue is balanced against a social interest in order and morality).
    • IMDb’s content did not constitute Commercial Speech.
    • IMDb’s content did not facilitate illegal conduct.
    • IMDb’s content did not implicate privacy concerns.
  • AB 1687 did not survive strict scrutiny because it was not the least restrictive means of accomplishing the goal and was not narrowly tailored.

In conclusion, the Court articulated a position that I wholly agree with: “Unlawful age discrimination has no place in the entertainment industry, or any other industry. But not all statutory means of ending such discrimination are constitutional.”

Citation: IMDb.com, Inc. v. Becerra, Case Nos. 18-15463, 18-15469 (9th Cir. 2020)

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

It’s hard to find caselaw to support your claims when you have none – Wilson v. Twitter

When a court’s opinion runs barely over a page in print, it’s a good sign that the underlying case had little to no merit.

This was a pro se lawsuit filed against Twitter because Twitter suspended at least three of Plaintiff’s accounts, which he had used to “insult gay, lesbian, bisexual, and transgender people,” for violating the company’s terms of service, specifically its rule against hateful conduct.

Plaintiff sued Twitter alleging that “[Twitter] suspended his accounts based on his heterosexual and Christian expressions” in violation of the First Amendment, 42 U.S.C. § 1981, Title II of the Civil Rights Act of 1964, and for alleged “legal abuse.”

The court was quick to dismiss all of the claims, explaining that:

  1. Plaintiff had no First Amendment claim against Twitter because Twitter is not a state actor; the court had to painfully explain that Twitter’s status as a publicly traded company does not transform it into a state actor.
  2. Plaintiff had no claim under § 1981 because he didn’t allege racial discrimination.
  3. Plaintiff’s Civil Rights claim failed because: (1) under Title II, only injunctive relief is available (not damages like Plaintiff wanted); (2) Section 230 of the Communications Decency Act bars his claim; and (3) Title II does not prohibit discrimination on the basis of sex or sexual orientation (and no facts were asserted to support such a claim).
  4. Plaintiff failed to allege any conduct by Twitter that could plausibly amount to legal abuse.

The court noted that Plaintiff “expresses his difficulty in finding case law to support his claims.” Well, I guess it would be hard to find caselaw to support claims when you have no valid ones.

Citation: Wilson v. Twitter, Civil Action No. 3:20-0054 (S.D. W.Va. 2020)

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.

Breaking down the DOJ Section 230 Workshop: Stuck in the Middle With You

The current debate over Section 230 of the Communications Decency Act (47 U.S.C. § 230) (often referred to as “Section 230” or the “CDA”) has many feeling a bit like the lyrics from Stealers Wheel’s “Stuck in The Middle With You,” especially the lines “clowns to the left of me, jokers to my right, here I am stuck in the middle with you.” As polarizing as the two extremes of the political spectrum seem to be these days, so are the arguments about Section 230.  Arguably, the troubling debate is compounded by politicians who either don’t understand the law or purposefully make misstatements about it in an attempt to further their own political agenda.

For those who may not be familiar with the Communications Decency Act: in brief, it is a federal law enacted in 1996 that, with a few exceptions carved out within the statute, protects the owners of websites/search engines/applications (each often synonymously referred to as “platforms”) from liability for third-party content.  Platforms that allow third-party content are often referred to as user generated content (“UGC”) sites.  Facebook, Twitter, Snapchat, Reddit, TripAdvisor, and Yelp are all examples of such platforms, and reasonable minds would likely agree that there is social utility behind each of these sites. That said, these household-name platform “giants” aren’t the only platforms on the internet that have social utility and benefit from the CDA.  Indeed, it also covers smaller platforms, including bloggers and journalists who want to allow people to comment on articles/content on their websites.

So, what’s the debate over?  Essentially, the difficult realities of humans and technology.  I doubt there would be argument over the statement that the Internet has come a long way since the early days of CompuServe, Prodigy, and AOL. I also believe there would be little argument that humans are flawed.  Greed was prevalent and atrocities were happening long before the advent of the Internet.  Similarly, technology isn’t perfect either.  If technology were perfect from the start, we wouldn’t ever need updates … version 1.0 would be perfect, all the time, every time.  That isn’t the world we live in, though … and that’s the root of the rub, so to speak.

Since the enactment of the CDA, an abundance of lawsuits has been initiated against platforms, the results of which further defined the breadth of the law.  For those really wanting to learn more and obtain a historical perspective on how the CDA came to be, read Jeff Kosseff’s book The Twenty-Six Words That Created the Internet.  To help better understand the current debate over this law, which will be discussed shortly, this is a good opportunity to point out a few of the (generally speaking) practical implications of Section 230:

  1. Unless a platform wholly creates or materially contributes to content on its platform, it will not be held liable for content created by a third party.  This immunity has also been extended to other tort theories of liability where it is ultimately found that the theory stems from the third-party content.
  2. The act of filtering content does not suddenly transform a platform into the “publisher” of that content (i.e., the person that created the content in the first place) for the purposes of imposing liability.
  3. A platform will not be liable for its decision to keep content up, or take content down, regardless of whether such information may be perceived as harmful (such as content alleged to be defamatory).
  4. Injunctive relief (such as a takedown order from a court) is legally ineffective against a platform if such order relates to content for which the platform would have immunity.

These four general principles are the result of litigation that ensued against platforms over the past 23+ years. However, a few fairly recent high-profile cases stemming from atrocities, along with our current administration (from the President down), have put Section 230 in the crosshairs, with desires for another amendment.  The question is, an amendment for what?  One side says platforms censor too much; the other side says platforms censor too little. Platforms and technology companies are being pressured to implement stronger data privacy and security for their users worldwide, while the U.S. government complains that the measures being taken are too strong and therefore allegedly hinder its investigations.  Meanwhile, the majority of platforms are singing “stuck in the middle with you,” trying to do the best they can for their users with the resources they have, which, unless you’re “big Internet” or “big tech,” are typically pretty limited.  And frankly, the Mark Zuckerbergs of the world don’t speak for all platforms, because not all platforms are like Facebook, nor do they have the kind of resources that Facebook has.  When it comes to implementation of new rules and regulations, resources matter.

On January 19, 2020, the United States Department of Justice announced that it would be hosting a “Workshop on Section 230 of the Communications Decency Act” on February 19, 2020 in Washington, DC.  The title of the workshop was “Section 230 – Nurturing Innovation or Fostering Unaccountability?”  The stated purpose of the event was to “[D]iscuss Section 230 … its expansive interpretation by the courts, its impact on the American people and business community, and whether improvements to the law should be made.”  The title was intriguing because it seemed to suggest that the answer was one or the other, when the two concepts are not mutually exclusive.

On February 11, 2020 the formal agenda for the workshop (the link to which has since been removed from the government’s website) was released.  The agenda outlined three separate discussion panels:

  • Panel 1: Litigating Section 230, which was to discuss the history, evolution, and current application of Section 230 in private litigation;
  • Panel 2: Addressing Illicit Activity Online, which was to discuss whether Section 230 encourages or discourages platforms from addressing online harms, such as child exploitation, revenge porn, and terrorism, and its impact on law enforcement; and
  • Panel 3: Imagining the Alternative, which was to discuss the implications of Section 230 and proposed changes on competition, investment, and speech.

The panelists were made up of legal scholars, trade associations, and a few outside counsel who represent plaintiffs or defendants.  More specifically, the panels were filled with many of the often-empaneled Section 230 folks, including legal scholars like Eric Goldman, Jeff Kosseff, Kate Klonick, and Mary Anne Franks, along with staunch anti-Section 230 attorney Carrie Goldberg, a victims’ rights attorney who specializes in sexual privacy violations.  Also added to the mix was Patrick Carome, who is famous for his Section 230 litigation work defending many major platforms and organizations like Twitter, Facebook, Google, Craigslist, AirBnB, Yahoo!, and the Internet Association.  Other speakers included Annie McAdams, Benjamin Zipursky, Doug Peterson, Matt Schruers, Yiota Souras, David Chavern, Neil Chilson, Pam Dixon, and Julie Samuels.

A review of the individual panelists’ bios would likely signal that the government didn’t want to include the actual stakeholders, i.e., representatives from any platform’s in-house counsel or in-house policy teams.  While not discounting the value of the speakers scheduled to be on the panels, one may find it odd that those who deal with these matters every day, who represent entities that would be the most impacted by modifications to Section 230, and who would be in the best position to determine what is or is not feasible to implement if changes to Section 230 were to happen, had no seat at the discussion table.  This observation was widespread … much discussion on social media about the lack of representation of the true “stakeholders” took place, with many opining that it wasn’t likely to be a fair and balanced debate and that this was nothing more than an attempt by U.S. Attorney General William Barr to gather support for the bill relating to punishing platforms/tech companies for implementing end-to-end encryption.  One could opine that the bill really has less to do with Section 230 and more to do with the government wanting access to data that platforms may have on a few perpetrators who happen to be using a platform/tech service.

If you aren’t clear on what is being referenced above, it bears mentioning that there is a bill titled the “Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2019,” aka the “EARN IT Act of 2019,” proposed by Senator Lindsey Graham.  This bill came approximately two weeks after AG Barr asked Apple to unlock and decrypt the Pensacola shooter’s iPhone.  When Apple responded that it couldn’t comply with the request, the government was not happy.  An article by the Cato Institute stated that “During a Senate Judiciary hearing on encryption in December Graham issued a warning to Facebook and Apple: ‘this time next year, if we haven’t found a way that you can live with, we will impose our will on you.’”  Given this information, and the agenda topics, the timing of the Section 230 workshop seemed a bit more than a coincidence.  In fact, according to an article in Minnesota Lawyer, Professor Eric Goldman pointed out that the “DOJ is in a weird position to be convening a roundtable on a topic that isn’t in their wheelhouse.”

As odd as the whole thing may have seemed, I had the privilege of attending the Section 230 “Workshop.”  I say “workshop” because it was a straight lecture without the opportunity for any meaningful Q&A dialog with the audience.  Speaking of the audience: of the people I had direct contact with, it consisted of reporters, internet/tech/First Amendment attorneys, in-house counsel and representatives from platforms, industry association representatives, individual business representatives, and law students.  The conversations that I personally had, and personally overheard, suggested that the UGC platform industry (the real stakeholders) was concerned or otherwise curious about what the government was trying to do to the law that shields platforms from liability for UGC.

PANEL OVERVIEW:

After sitting through nearly four hours’ worth of lecture, and even though I felt the discussion was a bit more well-rounded than I had anticipated, I still feel that the entire workshop could be summarized as follows: “humans are bad and do bad things; technology is a tool with which bad humans do bad things; technology/platforms need to find a way to solve the bad human problem or face liability for what bad humans occasionally do with the tools they create; we want to make changes to the law even though we have no empirical evidence to support the position that this is an epidemic rather than a minority … because bad people.”

Perhaps that is a bit of an oversimplification but honestly, if you watch the whole lecture, that’s what it boils down to.

The harms discussed during the different panels included:

  • Libel (brief mention)
  • Sex trafficking (Backpage.com, FOSTA, etc.)
  • Sexual exploitation of children (CSAM)
  • Revenge porn aka Non-Consensual Pornography aka Technology Facilitated Harassment
  • Sale of drugs online (brief mention)
  • Sale of alleged harmful products (brief mention)
  • Product liability theory as applied to platforms (a la Herrick v. Grindr)

PANEL #1:

In traditional fashion, the pro-Section 230 advocates explained the history of the CDA and how it is important to all platforms that allow UGC, not just “big tech,” and expounded on the social utility of the Internet … platforms large and small.  However, the anti-Section 230 panelists pointed mainly to harms caused by platforms (though they did not elaborate on which ones) not removing sexually related content (though defamation got a short mention in the beginning).

Ms. McAdams seemed to focus on sex trafficking, touching on how, once Backpage.com was shut down, a similar site started up in Amsterdam. She referred to the issues she was speaking about as a “public health crisis.” Of course, Ms. Goldberg raised arguments relating to the prominent Herrick v. Grindr case, wherein she argued a product liability theory as a workaround to Section 230. That case ended when the petition for writ of certiorari was denied by the U.S. Supreme Court in October of 2019. I’ve heard Ms. Goldberg speak on this case a few times, and one thing she continually harps on is the fact that Grindr didn’t have a way to keep Mr. Herrick’s ex from using its website. She seems surprised by this. As someone who represents platforms, it makes perfect sense to me. We must not forget that people can create multiple user profiles, from multiple devices, from multiple IP addresses, around the world. Sorry, Plaintiff attorneys … the platforms’ crystal ball is in the shop on these issues … at least for now. Don’t misunderstand me. I believe Ms. Goldberg is fighting the good fight, and her struggle on behalf of her clients is real! I admire her work, and no doubt she sees it through a lens from the trenches she is in. That said, we can’t lose sight of the reality of how things actually work versus how we’d like them to work.

PANEL #2:

There was a clear plea from Ms. Franks and Ms. Souras for something to be done about sexual images, including those exploiting children.  I am in 100% agreement that while 46 states have enacted anti-“revenge porn” laws (better termed non-consensual pornography laws), such laws aren’t strong enough because of the malicious intent requirement.  All a perpetrator has to say is “I didn’t mean to harm the victim, I did it for entertainment,” or cite another seemingly benign purpose, and poof, case closed.  That struggle is difficult!

No reasonable person thinks these kinds of things are okay, yet there seemed to be an argument that platforms don’t do enough to police and report such content.  The question becomes: why is that?  Lack of funding and resources would be my guess … either on the side of the platform OR, quite frankly, on the side of an under-funded/under-resourced government or agency that must actually handle what is reported.  What would be the sense of reporting unless you knew, for one, that the content was actionable and, for another, that the agency it is being reported to would actually do something about it?

Interestingly, Ms. Souras made the comment that after FOSTA no other sites (like Backpage.com) rose up.  Curiously, that directly contradicted Ms. McAdams’s statement about the Amsterdam website popping up after Backpage.com was shut down.  So which is it?  Pro-FOSTA statements also directly contradict what I heard last October at a workshop put on by ASU’s Project Humanities entitled “Ethics and Intersectionality of the Sext Trade,” which covered the complexities of sex trafficking and sex work.  Problems with FOSTA were raised during that workshop.  Quite frankly, I see all the flowery statements about FOSTA as nothing more than trying to put lipstick on a pig; trying to make a well-intentioned, emotionally driven law look like it is working when it isn’t.

Outside of the comments by Ms. Franks and Ms. Souras, AG Doug Peterson of Nebraska did admit that the industry may self-regulate, and that sometimes that happens quickly, but he still complained that the preemption of state criminal law makes his job more difficult and advocated for an amendment adding state and territory criminal law to the list of exemptions.  While that may sound moderate, the two can be different, and arguably such an amendment would be overbroad when you are only talking about sexual images.  Further, the inclusion of Mr. Peterson almost seemed like a plug for a subtle push about how the government allegedly can’t do its job without modification of Section 230, and I think part of what that was leaning towards, while not making a big mention of it, was the end-to-end encryption debate.  In rebuttal to this notion, Matt Schruers suggested that Section 230 doesn’t need to be amended but that the government needs more resources so it can do a better job with the existing laws, and he encouraged tech to work to do better where it can, suggesting that efforts from both sides would be helpful.

One last important point made during this panel was Kate Klonick’s distinction between the big companies and other sites that host non-consensual pornography.  It is important to keep in mind that different platforms have different economic incentives and that platforms are driven by economics.  I agree with Ms. Klonick that we are in a massive “norm setting” period where we are trying to figure out what to do with things, and that we can’t look to tech to fix bad humans (although it can help).  Sometimes, to have good things, we have to accept a little bad as the trade-off.

PANEL #3

This last panel was mostly a recap of the benefits of Section 230 and the struggles we face when trying to regulate with a one-size-fits-all mentality, and I think most of the panelists seemed to agree that there needs to be some research done before we go making changes, because we don’t want unintended consequences.  That is something I’ve been saying for a while and reiterated when the ABA Forum on Communications Law Digital Communications Committee hosted a free CLE titled “Summer School: Content Moderation 101,” wherein Jeff Kosseff and I, in a panel moderated by Elisa D’Amico, Partner at K&L Gates, discussed Section 230 and a platform’s struggle with content moderation.  Out of this whole panel, the one speaker who had most people grumbling in the audience was David Chavern, the President of the News Media Alliance.  When speaking about solutions, Mr. Chavern likened Internet platforms to traditional media, as if he were comparing two oranges, and opined that platforms should be liable just like newspapers.  Perhaps he doesn’t understand the difference between first-party content and third-party content.  The distinction between the two is huge, and therefore I found his commentary to be the least relevant and helpful to the discussion.

SUMMARY:

In summary, there seem to be a few emotion-evoking ills in society (non-consensual pornography, exploitation of children, sex trafficking, physical attacks on victims, fraud, and the drug/opioid crisis) that the government is trying to find methods to solve.  That said, I don’t think amending Section 230 is the way to address them unless and until there is reliable and unbiased data suggesting that the cure won’t be worse than the disease. Are the ills being discussed really prevalent, or do we just think they are because they are being pushed out through information channels on a 24-hour news/information cycle?

Indeed, reasonable minds would agree that we, as a society, should try to stop harms where we can, but we also have to stop regulating based upon emotions.  We saw that with FOSTA and, arguably, it has made things more difficult for law enforcement and victims alike and has had unintended consequences, including chilling speech, for others.  You simply cannot regulate the hate out of the hearts and minds of humans, and you cannot expect technology to solve such a problem either.  Nevertheless, that seems to be the position of many of the critics of Section 230.

For more reading and additional perspectives on the DOJ Section 230 Workshop, check out these additional links:

Disclaimer: This is for general information purposes only and none of this is meant to be legal advice and should not be relied upon as legal advice.