3Qs: Professors weigh in on SCOTUS decision, free speech on social media

by Matthew McDonald, June 3, 2015

On Monday, the Supreme Court for the first time addressed the implications of free speech on social media. In Elonis v. United States, the court reversed the conviction of a Pennsylvania man accused of making threats on Facebook against his estranged wife. In the decision, the court held that while threatening communication—a “true threat”—can be prosecuted under the law, the prosecution did not do enough to prove Elonis’ intent to threaten.

We asked three Northeastern faculty members—Michael Meltsner, Matthews Distinguished University Professor; Daniel Medwed, professor of law; and Brooke Foucault Welles, assistant professor in the Department of Communication Studies—to discuss the court’s decision and its potential implications for social media use and criminal law.

The court decided in Elonis v. United States that prosecutors did not do enough to prove a “true threat.” The case was decided on statutory grounds rather than on First Amendment issues. What motivated the court’s decision, and what are the broad legal findings?

MELTSNER: It’s common for the court to avoid constitutional questions when it can construe a statute instead, and that’s what happened in Elonis. What’s unusual about the decision is that while concluding that conviction of this federal crime requires more than showing the defendant’s ugly and provocative language could reasonably be understood as a threat, the justices in the majority went out of their way to resist clarifying what sort of mental state is required. The most likely explanation for the chief justice’s narrow opinion is that the court is either split as to the proper test or unwilling to resolve matters without learning more from subsequent cases. Whatever the reason, prosecutors and lower court judges are for the time being left in the dark.
Looking specifically at the social media aspect of the case, what are the criminal law implications of this case for social media activity and for users of social media?

MEDWED: Elonis reaffirms a longstanding and critically important principle of criminal law: a defendant generally must have a blameworthy mental state in order to be found guilty of a crime. A person who makes a statement on social media for the purpose of threatening someone else—or with the knowledge that it will be construed as a threat—may face federal criminal charges. But the mere fact that someone else on social media interprets the statement as a threat is not enough, in and of itself, to prove criminal conduct.

The upshot is that Elonis may make it harder for federal prosecutors to charge social media users with making online threats, but not unreasonably so. The decision fits nicely in a rich tradition of cases holding that, as Justice Robert Jackson famously wrote more than 60 years ago in Morissette v. United States, criminal culpability is the “concurrence of an evil-meaning mind with an evil-doing hand.”

You’ve studied how social networks shape and constrain human behavior. In your opinion, does this decision open the door to increased ‘trolling’ and harassment on social media channels? And if so, is there a solution to help social media users avoid such harassment?

WELLES: From a socio-technical systems standpoint, one of the ongoing challenges with new media is that the law rarely keeps up with the communication capabilities that technologies enable. People use new media to communicate in new ways, and it is only well after the fact that we apply any legal boundaries to those behaviors. Monday’s Supreme Court ruling offered some insight into what does not constitute a threat on social media, and determined that the intent of the message sender is the most important factor.
That is, social media posts are only threats if the author intended them to be threats. For me, this shifts too much power into the hands of the sender and reproduces many of the structural inequalities we see in more traditional media. Content creators have all the power—and plausible deniability—leaving receivers little recourse when messages leave them feeling uncomfortable, marginalized, or unsafe.

This sets up an ugly possibility for abuse, and it seems to enable much of the trolling and harassment that is plaguing social media. If abusers simply have to claim they did not intend for a message to be a threat, it seems difficult to stem the deluge of threatening messages some people receive online. Of course, the Supreme Court ruling did not give any insight into what does constitute a threat on social media, so I expect there will be many more cases that tackle this issue, and hopefully they will set guidelines for when a message or a set of messages goes too far, regardless of intent.

That said, one of the great things about socio-technical systems is that we can use the architecture of the technology itself to discourage behaviors that we find ethically objectionable, even if they are legally allowed. As a stopgap, I hope that the major social media companies continue to implement new features and algorithms that allow individual users to protect themselves from harassment, whether by blocking abusers, reporting them for violations of community standards, filtering their posts, or otherwise removing the pathways for communication between abusers and victims, regardless of legal intent.