UPDATE

AS OF JANUARY 1, 2013 - POSTING ON THIS BLOG WILL NO LONGER BE 'DAILY'. SWITCHING TO 'OCCASIONAL' POSTING.


Monday, December 03, 2012

FREE SPEECH: FROM LEGAL TO LETHAL

by Lori Andrews

Recently, Judge Roger Titus of Maryland declared unconstitutional a federal law that made it a crime to use the internet "with the intent to harass [or] cause substantial emotional distress to a person in another state." In the wake of that decision, legislatures and courts across the country will need to rethink existing statutes on cyberharassment.

In the Maryland case, William Cassidy had been charged with cyberstalking Alyce Zeoli, a former colleague and a Buddhist religious leader, based on his tweets, such as, "Do the world a favor and go kill yourself. P.S. Have a nice day." Zeoli asserted that the tweets made her fear so intensely for her safety that she had not left her house for a year and a half, except to see her psychiatrist. But the judge dismissed the case, and his reasoning will animate discussions in legislatures about how to amend state and federal laws.

Judge Titus indicated that "threats of harm" are punishable, but not communications "intending" emotional distress. He also considered it relevant that the medium used (a tweet) was communicated to the public at large rather than just the victim. The target of the harassment could just choose not to follow the tweets. "This," said the judge, "is in sharp contrast to a telephone call, letter or e-mail specifically addressed to and directed at another person, and that difference ... is fundamental to the First Amendment analysis in this case." Judge Titus also seemed to think that it was unreasonable for Zeoli to have such a dramatic reaction; he said that Cassidy's tweets were not a "true threat."

Words are powerful. They can move listeners or readers to action, sometimes even to harm themselves or someone else. But generally, our society doesn't punish the speaker or writer. Think about Ozzy Osbourne. Thirty years ago, he recorded the song Suicide Solution. The song states that "Suicide is the only way out," and contains the barely recognizable lyrics, sung at a faster speed, "Get the gun and try it; Shoot, shoot, shoot."

When a 19-year-old shot himself in the head with a .22 caliber handgun after spending five hours listening to Ozzy's music, his grieving parents sued Ozzy and the record distributor. A California appellate court rejected their claims, noting that speech does not lose its First Amendment protection merely because it "may evoke a mood of depression." The court said the lyrics failed to "order or command anyone to concrete action at any specific time."

But courts have held differently when the speech is directly addressed to a particular person. In a case currently on appeal in Minnesota, William Melchert-Dinkel was charged with pressuring two people over the internet to commit suicide. He posed as a young female nurse who pretended to enter into a suicide pact with his victims. The judge in the Minnesota case pointed out that Melchert-Dinkel's "encouragement and advice imminently incited the suicide of Nadia Kajouji and was likely to have that effect." The judge labeled the instant messages "lethal advocacy" and held that Melchert-Dinkel's words were "analogous to the category of unprotected speech known as 'fighting words' and 'imminent incitement of lawlessness.'" The judge distinguished messages sent to the public at large, saying that Melchert-Dinkel had the right to take his pro-suicide message to the public--over the internet, on television, and so forth--but did not have the right to address that message to a single, vulnerable individual. Melchert-Dinkel's attorney is appealing the case on First Amendment grounds.

But how direct does a threat have to be? What if Melchert-Dinkel had just sent Nadia an mp3 file with Ozzy’s Suicide Solution? Courts are already weighing whether people's "likes" on a social network can be used as evidence against them. In a Wisconsin case, a judge admitted into evidence a litigant's MySpace reference to a short story in which a judge was harmed. In contrast, a Mississippi court refused to use a dad's MySpace post of Ronald McDonald being shot in the face to prove that the mom should get custody of the kids.

The "intent" standard is also problematic. As Judge Titus suggested, the standard is too broad, covering speech that is constitutionally protected. But the "intent" standard is also, in some cases, too narrow. It might allow someone to evade legitimate prosecution by claiming they didn't intend harm, they just intended to be funny.

That strategy worked for 40-year-old Elizabeth Thrasher, whose victim was her ex-husband's new girlfriend's daughter. Thrasher posted photos, the phone number, and the email address of the 17-year-old girl in the "Casual Encounters" section of Craigslist, where people solicit casual sex. As a result of the posting, the girl was swamped with sexually explicit cell phone calls, emails, and text messages that included nude pictures and solicitations for sex. One man, after failing to reach her on her phone, even came to the Sonic restaurant where she worked, leading her to quit her job out of fear. She testified that the publication of the information made her feel like she "was set up to get killed and raped by somebody." Thrasher's attorney argued that photos of the girl and her work location were already available on the girl's MySpace profile. He said the postings were "tantamount to a practical joke"--and Thrasher was acquitted.

Whether a victim is targeted directly or addressed along with the public at large is also a question to be considered. Judge Titus suggests that, to be criminally actionable, tweets and posts need to be sent directly to the victim. But because of the nature of digital communications, Judge Titus' distinction between public tweets and direct communications with the victim may not hold up in future cases. Much of the cyberharassment of women does not involve a direct threat from one person to another. In a Connecticut case, a man posted a YouTube rap video of himself waving a gun while threatening to shoot his baby's mom and "put her face on the dirt until she can't breathe no more." Even though the man was in North Carolina at the time and the woman resided in Connecticut, the court issued a restraining order against him.

Under Judge Titus' standard, such a video might not have been a cause for concern because the woman could just have turned it off. This issue of targeting a private person versus the public at large will be key in cases where the person posts information (such as a Google map to a woman's house with a claim she wants men to act out rape fantasies) and the poster himself does not intend to do violence.

As courts and legislators deal with cyberharassment, they'll be determining what the limits are to punishing people for tweets and posts that threaten violence or cause emotional harm. They'll also have to determine whether the rule that the communication must be sent directly to the victim makes any sense in the age of Twitter, Facebook, and YouTube, where public posts--especially those that urge someone else to harm the victim--might be even more deadly than private ones.

Lori Andrews is the author of the upcoming I Know Who You Are and I Saw What You Did: Social Networks and the Death of Privacy.

Wednesday, August 22, 2012

INTERNET PROVIDERS & REVENGE BOARDS MAY BE LIABLE FOR PREDATORS & HARASSERS

by Jonathan Bick

The economic difficulty of pursuing individuals for bad acts has led injured parties to seek legal remedies from the companies that provide the platforms on which the bad acts occur. In the past, internet facilitators could avoid contributory and vicarious liability by claiming users' bad acts were beyond the facilitator's ken and control. Now, widely available, low-cost e-commerce technology diminishes the viability of those defenses.

Previously, passive internet service facilitators successfully argued that they did not "collaborate" with internet users to undertake bad acts because they were either unaware of the bad acts or could not act to prevent them in a timely fashion. Advances in internet technology, however, have increased facilitators' capacity to ameliorate internet bad acts automatically. Failure to employ such technology may result in increased liability for not preventing bad acts on the internet.

Internet facilitators include service providers, hosting services, blogging platforms, 'gripe' sites and social network sites, to name just a few. These internet service suppliers allow email, instant messaging, peer-to-peer communications, blogs, broad internet access, chat rooms, intranets, interactive websites, and other electronic communications. They also allow various goods and services transactions.

These transactions may result in a myriad of internet bad acts, ranging from defamation, copyright infringement, failure to protect trade secrets, and harassment (including hostile work-environment issues) to criminal accountability and loss of attorney-client privilege.

The nature and extent of internet bad acts are exacerbated by the fact that internet sites are accessible beyond national borders and no international code of internet behavior exists. Additionally, user-generated content may make up a substantial portion of an internet facilitator's site content, and the international legal community has yet to standardize intellectual property rights; international intellectual property standards are governed by multilateral treaties.

In the past, internet facilitators could avoid secondary liability for not stopping bad acts by showing one of two types of defenses. First, if charged with vicarious liability, facilitators could show that they did not possess the ability to supervise those who engaged in bad acts using the facilitator's internet assets. Second, if charged with contributory liability, they could show they did not have knowledge of the bad act involving the facilitator's internet assets. See MGM v. Grokster, 545 U.S. 913 (2005).

However, as internet technology increasingly allowed internet facilitators to prevent bad acts by third parties on their sites automatically, the United States implemented a statute providing a "safe harbor" that protects websites and web providers from secondary liability for certain bad acts, such as copyright violations performed by users on a facilitator's internet assets. The most wide-ranging safe-harbor provision is offered by the Digital Millennium Copyright Act of 1998, Pub. L. No. 105-304, 112 Stat. 2860 (codified at 17 U.S.C. § 101 et seq.) (DMCA).

Though the question of interpreting this part of the statute has yet to reach the Supreme Court, lower courts have been consistent in interpreting it broadly and have applied it to any entity that provides access to the internet. In particular, the court in ALS Scan, Inc. v. RemarQ Cmtys., Inc., 239 F.3d 619, 626 (4th Cir. 2001), found that a newsgroup website would fall under the definition of an internet facilitator. The court in Corbis Corp. v. Amazon.com, Inc., 351 F.Supp. 2d 1090, 1100 (W.D. Wash. 2004), found that Amazon.com fits within the definition as well.

However, the safe harbor also requires that an internet facilitator seeking its protection from secondary liability not have "actual knowledge" of the infringing material. The near universal use of internet technology that provides actual knowledge of the content of the facilitator's site and the site's related transactions may therefore be used by plaintiffs to argue that the facilitator has forfeited the safe harbor's protections.

Internet technology that allows a facilitator to limit an internet user's bad acts is available. The three most important technologies are: automatic internet user monitoring systems, "net nannies," and internet tracking software.

Automatic internet user monitoring systems, such as screen capture utilities and keylogger software, record all information that is sent to an internet facilitator's site. These monitoring systems can feed captured data to software tools that prevent internet users from taking certain actions to facilitate bad acts, such as installing malware and distributing unlawful spam, among other activities.
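
By way of illustration, a minimal sketch of such a screening hook might look like the following Python, assuming a site that accepts user-submitted posts; the Post type, the signature list, and screen_post are hypothetical stand-ins rather than any vendor's actual product.

```python
# Sketch of an automated monitoring hook that screens user submissions
# before they are published. All names and signatures here are hypothetical.
import re
from dataclasses import dataclass

@dataclass
class Post:
    user: str
    body: str

# Hypothetical indicators of bad acts: known malware hosts and spam phrasing.
MALWARE_HOSTS = {"malware.example.net", "payload.example.org"}
SPAM_PATTERN = re.compile(r"(?i)\b(wire transfer fee|work from home)\b")

def screen_post(post: Post) -> tuple[bool, str]:
    """Return (accepted, reason); runs before the post goes live."""
    for host in MALWARE_HOSTS:
        if host in post.body:
            return False, f"links to known malware host {host}"
    if SPAM_PATTERN.search(post.body):
        return False, "matches spam signature"
    return True, "ok"

print(screen_post(Post("alice", "Earn $$$ -- work from home now!")))
# -> (False, 'matches spam signature')
```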

For more than 10 years, net-nanny software has provided internet facilitators with a means of filtering web content so that their sites are not used for purposes deemed inappropriate. Net nannies may be used to stop the distribution of images of an unlawful nature, deny access to internet users whom the internet facilitator deems to be undesirable, and generally censor unacceptable behavior automatically on behalf of the internet facilitator.
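
A toy sketch of that filtering logic, assuming a simple deny list of users and terms, appears below; real net-nanny products rely on far more elaborate classifiers and image matching than this illustrates.

```python
# Toy net-nanny-style filter: deny listed users and censor disallowed terms.
# The block list and term list are placeholders for a facilitator's policy.
BLOCKED_USERS = {"banned_user_42"}
DISALLOWED_TERMS = {"offensiveterm"}

def filter_submission(user: str, text: str) -> str | None:
    """Return sanitized text, or None if the user is denied access."""
    if user in BLOCKED_USERS:
        return None                                  # deny undesirable users
    for term in DISALLOWED_TERMS:
        text = text.replace(term, "*" * len(term))   # censor automatically
    return text

print(filter_submission("banned_user_42", "hello"))        # -> None
print(filter_submission("alice", "that offensiveterm!"))   # -> 'that *************!'
```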

Existing internet user-tracking software can usually narrow an internet user's geographic location to a radius of several hundred feet, without requiring the user's permission. This is done by sending a message to the target and combining the time it takes to bounce back with the internet user's IP address and mapping software such as Google Maps. Knowing the likely geographic location of an internet user can allow the internet facilitator to prevent internet bad acts, such as a site user's sending goods into a state that has deemed such goods contraband.
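
A rough sketch of the two ingredients, a round-trip-time bound and an IP-based location guess, might look like this; geo_lookup is a hypothetical stand-in for a GeoIP database, and "several hundred feet" should be read as the optimistic case.

```python
# Sketch of latency-assisted geolocation. geo_lookup() is a hypothetical
# stand-in for a GeoIP database; real systems combine many vantage points.
import socket
import time

def measure_rtt(ip: str, port: int = 80) -> float:
    """Estimate round-trip time as the duration of a TCP handshake."""
    start = time.monotonic()
    with socket.create_connection((ip, port), timeout=2):
        pass  # connection closed on exiting the with-block
    return time.monotonic() - start

def geo_lookup(ip: str) -> tuple[float, float]:
    """Hypothetical GeoIP query returning (latitude, longitude)."""
    return (38.0, -97.0)  # placeholder coordinates

def max_distance_m(rtt_s: float) -> float:
    # Light in fiber travels at roughly 2e8 m/s, so half the round-trip
    # time bounds how far away the user can possibly be.
    return (rtt_s / 2) * 2e8
```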

In combination, automatic internet user monitoring systems, net nannies, and internet tracking software are capable of removing unlawful or unacceptable content and sending an electronic message to the bad actor informing that person of the violation that has been committed. Internet technology may also mete out sanctions automatically. In particular, certain internet technology may automatically bar a bad actor's access after determining that a violation of the terms of use agreement associated with the internet facilitator's sites has occurred.
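
A compact sketch of such automated enforcement, with the facilitator's storage and messaging stubbed out, might look like this; the three-strikes threshold is purely illustrative, not anything the law requires.

```python
# Sketch of automated sanctions: remove the offending content, notify the
# poster, and revoke access after repeated terms-of-use violations.
# In-memory stubs stand in for real storage and messaging systems.
violations: dict[str, int] = {}
banned: set[str] = set()
MAX_STRIKES = 3   # arbitrary illustrative threshold

def notify(user: str, message: str) -> None:
    print(f"[notice to {user}] {message}")

def remove_content(post_id: str) -> None:
    print(f"[moderation] deleted {post_id}")

def enforce(user: str, post_id: str, reason: str) -> None:
    remove_content(post_id)                       # take down the material
    notify(user, f"post {post_id} removed: {reason}")
    violations[user] = violations.get(user, 0) + 1
    if violations[user] >= MAX_STRIKES:
        banned.add(user)                          # bar further access
        notify(user, "account disabled for repeated violations")

for i in range(3):
    enforce("mallory", f"post-{i}", "terms-of-use violation")
print("mallory" in banned)   # -> True
```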


While changes in internet technology may change internet facilitators' liability in the United States, such changes may be blunted in Europe due to the implementation of local law. The European Union has attempted to deal with the liability of internet facilitators by issuing a series of directives.

Chief among these is the E-Commerce Directive, which grants liability exemptions to passive internet facilitators. See Directive 2000/31/EC, arts. 40-58, 2000 O.J. (L 178) 1 (EC). The exemptions apply only if the internet facilitator does not "collaborate" with a user to undertake illegal acts and acts expeditiously to remove access to illegal information upon receiving notice of the illegal activity.

While the directive is binding on member states as to the effect to be achieved, it allows each member state to design its own implementation process for its sovereign jurisdiction. The directive does not address internet technology; thus, the use or failure to use such technology is not a factor in assessing internet facilitator liability.

Even if the use of monitoring and control technology were integrated into the E-Commerce Directive, the result would not be clear, as evidenced by the three cases in Spain, Germany, and Italy, paralleling Viacom International Inc. v. YouTube, Inc., that consider YouTube's liability for user copyright infringement.

All three countries are members of the European Union and thus subject to the E-Commerce Directive. Yet the cases have resulted in a YouTube victory in Spain, but losses for YouTube in Germany and Italy.