Security

FSI scholars produce research aimed at creating a safer world and examining the consequences of security policies for institutions and society. They look at longstanding issues, including nuclear nonproliferation and conflicts between countries such as North and South Korea. But their research also examines new and emerging threats that transcend traditional borders, such as the drug war in Mexico and expanding terrorism networks. FSI researchers study the changing methods of warfare, with a focus on biosecurity and nuclear risk. They tackle cybersecurity with an eye toward privacy concerns and explore the implications of new actors such as hackers.

Along with the changing face of conflict, terrorism and crime, FSI researchers study food security. They tackle the global problems of hunger, poverty and environmental degradation by generating knowledge and policy-relevant solutions. 

Postdoctoral Fellow, Stanford Internet Observatory (2021-2022)
Predoctoral Fellow, Stanford Internet Observatory (2020-2021)

Josh A. Goldstein is a past postdoctoral scholar at the Stanford Internet Observatory. He received his PhD in International Relations from the University of Oxford, where he studied as a Clarendon Scholar. At the Stanford Internet Observatory, Dr. Goldstein investigated covert influence operations on social media platforms, studied the effects of foreign interference on democratic societies, and explored how emerging technologies will impact the future of propaganda campaigns. He has given briefings to the Department of Defense, the State Department, and senior technology journalists based on this work, and published in outlets including Brookings, Lawfare, and Foreign Policy.

Prior to joining SIO, Dr. Goldstein received an MPhil in International Relations at Oxford with distinction and a BA in Government from Harvard College, summa cum laude. He also assisted with research and writing related to international security at the Belfer Center, Brookings Institution, House Foreign Affairs Committee, and Department of Defense.


Daphne Keller leads the newly launched Program on Platform Regulation, a program designed to offer lawmakers, academics, and civil society groups groundbreaking analysis and research to support wise governance of Internet platforms.

Q: Facebook, YouTube and Twitter rely on algorithms and artificial intelligence to provide services for their users. Could AI also help in protecting free speech and policing hate speech and disinformation?   

DK: Platforms increasingly rely on artificial intelligence and other algorithmic means to automate the process of assessing – and sometimes deleting – online speech. But tools like AI can’t really “understand” what we are saying, and automated tools for content moderation make mistakes all the time. We should worry about platforms’ reliance on automation, and worry even more about legal proposals that would make such automated filters mandatory. Constitutional and human rights law give us a legal framework to push back on such proposals, and to craft smarter rules about the use of AI. I wrote about these issues in this New York Times op-ed and in some very wonky academic analysis in the Journal of European and International IP Law.

Q: Can you explain the potential impacts on citizens’ rights when the platforms have global reach but governments do not?

DK: On one hand, people worry about platforms displacing the legitimate power of democratic governments. On the other hand, platforms can actually expand state power in troubling ways. One way they do that is by enforcing a particular country’s speech rules everywhere else in the world. Historically that meant a net export of U.S. speech law and values, as American companies applied those rules to their global platforms. More recently, we’ve seen that trend reversed, with things like European and Indian courts requiring Facebook to take user posts down globally – even if the users’ speech would be legally protected in other countries. Governments can also use soft power, or economic leverage based on their control of access to lucrative markets, to convince platforms to “voluntarily” enforce that country’s preferred speech rules worldwide. That’s particularly troubling, since the state influence may be invisible to the users whose rights are affected.

There is such a pressing need for thoughtful work on the laws that govern Internet platforms right now, and this is the place to do it... We have access to the people who are making these decisions and who have the greatest expertise in the operational realities of the tech platforms.
Daphne Keller
Director of Program on Platform Regulation, Cyber Policy Center Lecturer, Stanford Law School

Q: Are there other ways that platforms can expand state power? 

DK: Yes, platforms can let states bypass democratic accountability and constitutional limits by using private platforms as proxies for their own agenda. States that want to engage in surveillance or censorship are constrained by the U.S. Constitution, and by human rights laws around the world. But platforms aren’t. If you’re a state and you want to do something that would violate the law if you did it yourself, it’s awfully tempting to coerce or persuade a platform to do it for you. This issue of platforms being proxies for other actors isn’t limited to governments – anyone with leverage over a platform, including business partners, can potentially play a hidden role like this.

I wrote about this complicated nexus of state and private power in Who Do You Sue? for the Hoover Institution.    

Q: What inspired you to create the Program on Platform Regulation at the Cyber Policy Center right now?

DK: There is such a pressing need for thoughtful work on the laws that govern Internet platforms right now, and this is the place to do it. At the Cyber Policy Center, there’s an amazing group of experts, like Marietje Schaake, Eileen Donahoe, Alex Stamos and Nate Persily, who are working on overlapping issues. We can address different aspects of the same issues and build on each other’s work to do much more together than we could individually.

The program really benefits from being at Stanford and in Silicon Valley because we have access to the people who are making these decisions and who have the greatest expertise in the operational realities of the tech platforms. 

The Cyber Policy Center is part of the Freeman Spogli Institute for International Studies at Stanford University.


Keller explains some of the issues currently surrounding platform regulation


Join the Cyber Policy Center on August 26th at 10 a.m. Pacific time for a look at how governments around the world are pushing to ban strong encryption. The talk will feature speakers Samuel Woolley, Riana Pfefferkorn and Matthew Baum as they explore the different policy arguments governments use to justify their agendas. This event is open to the public, but registration is required.



Matthew A. Baum (Ph.D., UC San Diego, 2000) is the Marvin Kalb Professor of Global Communications and Professor of Public Policy at Harvard University's John F. Kennedy School of Government and Department of Government. His research focuses on delineating the effects of domestic politics on international conflict and cooperation in general and American foreign policy in particular, as well as on the role of the mass media and public opinion in contemporary American politics. His research has appeared in over a dozen leading scholarly journals, such as the American Political Science Review, the American Journal of Political Science, and the Journal of Politics. His books include Soft News Goes to War: Public Opinion and American Foreign Policy in the New Media Age (2003, Princeton University Press), War Stories: The Causes and Consequences of Public Views of War (2009, Princeton University Press, co-authored with Tim Groeling), and War and Democratic Constraint: How the Public Influences Foreign Policy (2015, Princeton University Press, co-authored with Phil Potter). He has also contributed op-ed articles to a variety of newspapers, magazines, and blog sites in the United States and abroad. Before coming to Harvard, Baum was an associate professor of political science and communication studies at UCLA.

Riana Pfefferkorn is the Associate Director of Surveillance and Cybersecurity at the Stanford Center for Internet and Society. Her work focuses on investigating and analyzing the U.S. government's policy and practices for forcing decryption and/or influencing crypto-related design of online platforms and services, devices, and products, both via technical means and through the courts and legislatures. Riana also researches the benefits and detriments of strong encryption on free expression, political engagement, economic development, and other public interests.

Dr. Samuel Woolley is a writer and researcher. He is an assistant professor in the School of Journalism and in the School of Information (by courtesy) at the University of Texas at Austin. He is the program director of propaganda research at the Center for Media Engagement at UT. Woolley's work focuses on the ways in which emerging technologies are leveraged for both democracy and control. He is the author of the book "The Reality Game: How the Next Wave of Technology Will Break the Truth" (PublicAffairs), an exploration of how tools from artificial intelligence to virtual reality are being used to manipulate public opinion and what society can do to respond. He is the co-editor, with Dr. Philip N. Howard, of the book "Computational Propaganda" (Oxford University Press), a series of country-based case studies on social media and digital information operations.


Prime Minister Theresa May’s political fortunes may be waning in Britain, but her push to make internet companies police their users’ speech is alive and well. In the aftermath of the recent London attacks, Ms. May called platforms like Google and Facebook breeding grounds for terrorism. She has demanded that they build tools to identify and remove extremist content. Leaders of the Group of 7 countries recently suggested the same thing. Germany wants to fine platforms up to 50 million euros if they don’t quickly take down illegal content. And a European Union draft law would make YouTube and other video hosts responsible for ensuring that users never share violent speech.


Daphne Keller's work focuses on platform regulation and Internet users' rights. She has testified before legislatures, courts, and regulatory bodies around the world, and published both academically and in the popular press on topics including platform content moderation practices, constitutional and human rights law, copyright, data protection, and national courts' global takedown orders. Her recent work focuses on legal protections for users’ free expression rights when state and private power intersect, particularly through platforms’ enforcement of Terms of Service or use of algorithmic ranking and recommendations. Until 2020, Daphne was the Director of Intermediary Liability at Stanford's Center for Internet and Society. She also served until 2015 as Associate General Counsel for Google, where she had primary responsibility for the company’s search products. Daphne has taught Internet law at Stanford, Berkeley, and Duke law schools. She is a graduate of Yale Law School, Brown University, and Head Start.

Daphne blogs about platform regulation and Internet users' rights.


Alex Feerst, one of the great thinkers about Internet content moderation, has a revealing metaphor about the real-world work involved. “You might go into it thinking that online information flows are best managed by someone with the equivalent of a PhD in hydrology,” he says. “But you quickly discover that what you really need are plumbers.” The daily work of enforcing Terms of Service, or honoring legal takedown demands under laws like the Digital Millennium Copyright Act (DMCA), is all about the plumbing. If you don’t identify rules and operational logistics that function at scale, then you won’t accomplish what you set out to do. If you’re trying to enforce Terms of Service, you’ll get erratic and unfair outcomes. If you’re trying to enforce laws by making platforms liable for users’ unlawful posts, you’ll incentivize removal of lawful speech, encouraging platforms to appease anyone who merely claims online speech is illegal.

Until recently, despite the seemingly daily drumbeat of legislative proposals in this area, none of the lawmakers seemed to have talked to any plumbers. Bills like FOSTA and EARN IT proclaimed important goals, but did not lay out a system to actually achieve them. The original version of the EARN IT Act in particular failed in this regard. Its goal was incredibly important: to combat the scourge of child sexual exploitation online. But its mechanism for achieving that goal was basically a punt. EARN IT told platforms’ operational teams that they would be subjected to some rules eventually – but left those rules to be determined by an unaccountable body at some later date, after the law was already in place.  The more recent EARN IT draft, which passed out of committee on July 2, is, in a way, worse. It gives up on the idea of setting clear rules for platform content moderation operations at all. Instead, it exposes platforms to an unknown array of state laws under vague standards, to be interpreted by courts at some future date — leaving companies to guess how they need to redesign their services to avoid huge civil fines or criminal prosecution.

Against this backdrop, the “Platform Accountability and Consumer Transparency Act” (the PACT Act), sponsored by Sens. Brian Schatz (D-HI) and John Thune (R-SD), is a huge step forward. That’s not to say I love it (or endorse it). There are parts I strongly disagree with. Other parts build out ideas that might be workable, but it depends on details that are not resolved yet in the draft, or that just plain need fixing. Still: This is an intellectually serious effort to grapple with the operational challenges of content moderation at the enormous scale of the Internet. That in itself makes it remarkable in today’s DC environment. We should welcome PACT as a vehicle for serious, rational debate on these difficult issues.  

Focusing on operational logistics, as PACT does, is important. But of course, to achieve its legitimate ends, the law still has to get those logistics right. In that sense, PACT has a long way to go. The process-based rules that it sets forth need a lot more tire-kicking. They should get at least the level of careful review and negotiation that the DMCA did back in 1998: a lot of meetings, among a lot of stakeholders, putting in enough time to truly hash out questions of operational detail for those harmed by online content, users accused of breaking the law, and platforms. We seem less able to have careful discussion now than we were back then. But we should be much better equipped to do it. Today we have not only a broad cadre of academic and civil society experts in intermediary liability -- many of whom signed onto these Seven Principles on point -- but also an entire field of professionals who work on content moderation or on “trust and safety” more broadly. We have the plumbers! They know how things work! If PACT’s sponsors are serious about proposing the best possible law, they should bring them in to fix this thing.  

Big picture, PACT has (1) a set of rules for platform liability for unlawful content, (2) a set of consumer protection-based rules that mostly affect platforms’ voluntary moderation under Terms of Service, and (3) a few non-binding items.

1. Rules Changing CDA 230 Immunity and Exposing Platforms to Liability for Unlawful User Content

A. The Court Order Standard: Courts should decide what’s illegal, and once they do, platforms should honor those decisions.

If you want to walk back protections of CDA 230, taking away immunity for content that platforms know was deemed illegal by a court after a fair process is the low-hanging fruit. But this “court order standard” is no panacea. On the one hand, it is subject to abuse when frivolous claimants get default judgments, or just falsify court orders. On the other, some would argue that it creates too high a bar if it lets platforms leave up highly damaging material that we think they should be able to recognize as illegal without waiting for a court to act. (Though the worst know-it-when-you-see-it illegal content is proscribed by federal criminal law, for which Section 230 does not immunize platforms anyway.) For all its flaws, though, this standard would eliminate immunity in some of the most egregious cases currently covered by 230. It is the standard that many international human rights experts, civil society groups, legislatures, and courts have arrived at after wrangling for years with the problems created by fuzzier standards.

The PACT court order standard does have some problems. First, the way the bill is drafted makes it far too easy for subsequent amendments to eliminate the court order requirement entirely. A few wording changes in the bill’s Definitions section would leave the overall law with a very problematic notice and takedown model for anything that an accuser merely alleges is illegal. (More on that in the next section.) Second, the list of harms that can be addressed by court orders is weird. PACT allows court-order based takedowns under any federal criminal or civil law, or under state defamation law. That leaves out a lot of real harms that are primarily addressed by state law, like non-consensual sexual images (“revenge porn”). Meanwhile, no one I’ve talked to is even sure what the universe of federal civil claims looks like. It seems unlikely to correspond to the online content people are most concerned about. This focus on federal civil law crops up throughout PACT, and to be honest I don’t understand why. (If platforms were interpreting the law, under notice and takedown, then the problems with exposing them to all those divergent state laws would be obvious. But courts do the interpreting under the court order standard, so diversity of state laws is not the issue.) Having a list of the specific claims PACT would authorize would really help anyone who wants to understand the law’s likely impact.

B. The Procedural Notice and Takedown Model: The law should specify a process for accusers to notify platforms about allegedly (or in PACT’s case, court-adjudicated) illegal content, for platforms to respond, and for accused speakers to defend themselves.

This is intermediary liability 101.  We already have a decent (if flawed) model for legally choreographed notice and takedown in the DMCA. Frankly, it’s embarrassing that other US legislative proposals so far have not bothered to include things like “counter-notice” opportunities for speakers targeted by takedown notices, or penalties for bad-faith accusers (who are legion). To be clear, even with procedural protections like these, notice and takedown based on private individuals’ allegations, rather than court adjudication, would be seriously problematic for many kinds of speech. Imagine if anyone alleging defamation could make platforms silence a #metoo post or remove a link to news coverage criticizing a politician, for example. 

Hecklers will always send bogus or abusive notices, platforms will always have incentives to comply, and offering appeals to victimized speakers won’t be enough to offset the problem. But any kind of process is worlds better than amorphous standards like liability for “recklessness” or for content that platforms “should” have known about. No one knows what those standards mean in practice, and no one but the biggest incumbent platforms will want to assume the risk and expense of litigating to find out. This kind of legal uncertainty hurts smaller competitors more than bigger ones, in addition to threatening the speech rights of ordinary Internet users.  

PACT’s notice and takedown process isn’t perfect. Perhaps most troubling is the requirement to take down content within 24 hours of notification -- a standard much like the one recently deemed to unconstitutionally infringe users’ expression rights under French legal standards. The PACT Act has some carve-outs to this obligation, but the bottom line is a powerful new pressure to comply, even in cases of serious uncertainty about important speech or information. That pressure is, in my opinion, too blunt an instrument. There are more granular problems in PACT’s takedown process rules, too. For example, I initially read PACT to require these notices (like DMCA notices) to spell out the exact URL or location of the illegal content. That’s pretty standard in notice and takedown systems. But on review I realized the notice can be much clumsier and broader, merely identifying the accused’s account.   

C. Empowering Federal Agency Enforcers: Platforms are not immune from federal agency enforcement of federal laws or regulations.

I believe my summary captures the PACT drafters’ intent, but to be honest, I am not 100% sure what the words mean (and I’ve heard the same confusion from others). The bill says that platforms lose immunity against enforcement “by the Federal Government’’ of any “Federal criminal or civil statute, or any regulations of an Executive agency (as defined in section 105 of title 5, United States Code) or an establishment in the legislative or judicial branch of the Federal Government.’’ I think that means that HUD, EPA, FDA, CPSC, and others can bring enforcement actions. But maybe it also opens up exposure to civil claims broadly from DOJ? Here again, we could all understand this better if we had a list of the kinds of claims at stake. My suspicion is that there are not all that many areas where (a) agencies have relevant enforcement power, and also (b) 230 would even matter as a defense. (I don’t think 230 is necessarily a defense to important HUD housing discrimination claims, for starters.) But without a list of claims, it’s hard to say.

However odd this is in practice, I think I understand the theoretical justification. Making platforms take content down based on any accuser’s legal claim has obvious problems, but waiting for a court to decide is slow and can limit access to justice for victims of real legal violations. So it’s natural to look for a compromise approach, and empowering trusted government agencies is an obvious one. (This has been done or proposed in a lot of countries and is quite controversial in some – basically where the agencies are least trusted by civil society.) If agencies don’t have the authority to require content removal for First Amendment or Administrative Procedure Act reasons, but they do have power to bring a court case, this essentially puts the “heckler’s veto” power to threaten litigation in the hands of government lawyers instead of private individuals. In principle, that’s a reasonable place to put it, and we should be able to expect them to use their power wisely. In practice, the Supreme Court has repeatedly had to stop government lawyers from telling publishers and distributors what they can say, or enable others to say. That’s precisely what the First Amendment is about. This is also a strange time in American history to consider handing this power to federal agencies, given very serious concerns about their politicization, and the President’s recent efforts to use federal authority to shape platforms’ editorial policies for user speech. 

D. Empowering State Attorney General Enforcers: State attorneys general can bring federal civil claims against platforms (if their states have analogous laws, and the state AG consults with the federal one). 

This opens up claims under any federal civil law, not just the ones agencies enforce. (Maybe this implicitly means we should interpret the agency provision above as an authorization for AG Barr to bring any federal civil claim against platforms, too?) This again seems weird, because it’s not clear why federal civil law is the relevant body of law, or why empowering State AGs to enforce it solves our most pressing problems. That’s especially the case if they will enforce easily politicized or subjective rules, like FTC Act standards for “fairness” of TOS enforcement. 

State AGs have a tendency to push for enforcement of disparate state approaches, which raises obvious problems in governing the Internet. Some also have a pretty serious history of financially or politically motivated shenanigans, including taking sides in ongoing power struggles between corporate titans in the content, tech, and telecoms industries. One state AG, for example, literally sent Google a threatening letter extensively redlined by MPAA lawyers. Wired reports similar concerns about News Corp’s role in cultivating state AG attention to Facebook in 2007. Opening up more state forum shopping for these fights under PACT, and potentially subjecting platforms to conflicting back-room political pressures from red and blue state AGs, makes me pretty uneasy.

E. Ensuring That Platforms Are Not Required to Actively Police User Speech: PACT appears to preserve this essential protection for user rights, but needs clarification.

Requiring platforms to proactively monitor their users’ communications is the third rail of intermediary liability law. In Europe, it has been the center of the biggest fights in recent years. In the U.S., making platforms actively review everything users say in search of legal violations could raise major issues under both the First and Fourth Amendments. So far, U.S. law has steered far clear of this -- federal statutes on both copyright and child sexual abuse material, for example, expressly disclaim any monitoring requirements for intermediaries.

PACT appears to hew to this important principle with a standard immunity-is-not-conditioned-on-monitoring provision. But… it’s not entirely clear if this passage actually does the job. That’s especially the case given some troublingly fuzzy language about requiring platforms to not just take down content but also “stop… illegal activity” by users. It’s not clear what that language means, short of dispatching platforms to police users’ posts or carry out prior restraints on their speech. Getting this language buttoned up tighter will be critical if the bill moves forward. (This one is really an issue for the legal/Constitutional nerds, not the content moderation operations specialists.)

F. Regulating Consumer-Facing Edge Platforms, Not Internet Infrastructure: PACT has some limits, but needs more.  

Whatever we think the right legal obligations are for the Facebooks and YouTubes of the world, those probably are not the right obligations for Internet infrastructure providers. Companies that offer Internet access, routing, domain name resolution, content delivery networks, payment processing, and other technical or functional processing in the deeper layers of the Internet simply don’t work the same way. For one thing, they are blunt instruments. Many of them have no ability to take down just the image, post, or page that violates a law -- they can only shut off an entire website, service, or app.  

PACT takes a step toward carving these providers out of its scope, but it doesn’t go far enough. (It only carves them out to the extent that they are providing service to another 230-immunized entity.) This shouldn’t be hard to fix.

2. Rules Regarding Platforms’ Voluntary Measures to Enforce Terms of Service 

A. The Consumer Protection Model: If platforms are going to enforce private speech rules under their Terms of Service, they should state the rules clearly and enforce them consistently. Failure to do so is a consumer protection harm, like a bait-and-switch or a failure to label food correctly.

[Update July 17: Staff involved in the bill tell me that the intent of this section is for the FTC to enforce failures of process in TOS enforcement (like failure to offer appeals, publish transparency reports, etc.), not for the FTC to determine which substantive outcomes are correct under the platform's TOS. That's a meaningful difference and would mitigate some of the First Amendment concerns I mention below. Clarification in the text about this (especially re the FTC assessing "appropriate" steps by the platform) would help, since most people I consulted with about the bill and this post seemed to read (or mis-read) the provision the same way I did.]

Consumer protection provides a useful framing, and one that critics on the left and right can often agree on. European regulators used consumer protection law to reach agreements about TOS enforcement with platforms in 2018, and ideas along these lines keep coming up inside and outside the United States. At a (very) high level, I like the idea that platforms should have to make clear commitments to users, and uphold them.  But there are very real operational and constitutional issues to be resolved. 

As a practical matter, I have a lot of questions. Exactly how much detail can we reasonably demand from platforms explaining their rules – is it enough if they provide the same level of detail as state legal codes, for example? Are they supposed to notify users every time some new form of Internet misbehavior crops up and prompts them to update the rules (which happens all the time)? How far do we want to go in having courts or regulators second-guess platforms in hard judgment calls? Speaking of those courts or regulators, how would we possibly staff them for the inevitable deluge of disputes? (I think PACT’s answer is that the FTC brings the occasional enforcement action but isn’t required to handle any particular complaint.)

Then there are the constitutional issues. If a regulator rejects the platform’s interpretation of its own rules in a hard case, is it essentially overriding platform editorial policy, and does that violate platforms’ First Amendment rights? Is the government essentially picking winners and losers among lawful user posts, and does that violate the users’ First Amendment rights? Even without activist court or agency interpretations, is it a problem generally to use consumer protection law to restrict what are essentially editorial choices? (This really isn’t the same as food labeling, for example. As the Supreme Court explained in rejecting a similar analogy years ago, “[t]here is no specific constitutional inhibition against making the distributors of food the strictest censors of their merchandise, but the constitutional guarantees of the freedom of speech and of the press stand in the way of imposing a similar requirement on the bookseller.”) 

For all my sympathy for this approach, many of the things that trouble me most about PACT are in these sections. This is one place where the bill could most benefit from careful review by the plumbers I mentioned. I am not an operations specialist by Silicon Valley standards, and I did not try to vet every aspect of this proposal. But even I can see a lot of issues.  The bill requires many platforms to offer call centers, for example, not just the web forms and online communications typically used today. A noted trust and safety expert, Dave Willner, said in a panel I attended that this would lead to worse and slower outcomes for most users trying to solve real problems. He concluded that “you’d be better off taking the cash this would cost and burning it for heat.” (That’s a paraphrase. In spirit, it is the same thing I’ve heard from a lot of people.)  PACT also requires 14-day turnaround time for responding to notifications, which sounds good in theory but in practice may be truly difficult for small platforms facing hard judgment calls or sudden increases in traffic, notifications, or abuse. Even for larger platforms, a standard like this could force them to prioritize recent notices at the expense of ones that may be more serious (identifying more harmful content) or accurate (coming from a source with a good track record).  

B. Transparency Reports: Making platforms (or platforms over a certain size) publish aggregate data about content moderation, in a format that permits meaningful comparison between platforms.

I am a huge fan of transparency. Without it, we will stay where we are now: people on all sides of the platform content moderation issues will keep slinging anecdotes like mud. The ones with more power and media access will get more attention to their anecdotes. That’s a terrible basis for lawmaking. If lawmakers have the power to get the facts first and legislate second, we will all be better off.

That said, tracking more detail about every content demotion or removal (the grounds for it, what kind of entity sent the notice, what rule it violated and why, the role of automation, whether there was an appeal, and so on) adds up to a fair amount of work, especially for smaller companies. They can’t track everything. The challenge is orders of magnitude bigger for transparency about what PACT calls “deprioritization” of content. Google, for example, adjusts its algorithm some 500-600 times per year, affecting literally trillions of possible search outcomes. It’s not clear what meaningful and useful transparency about that even looks like. Those of us who advocate for transparency should be smart about what precise information we ask for, so we get the optimal bang for our societal buck (and so we don’t fail to ask for something that will later turn out to matter a lot). Right now, although people like me put out lists of possible asks, and researchers who have been trying and too often failing to get important information from platforms often have very specific critiques of current transparency measures, we don't really have an informed consensus on what the priorities should be. So here, too, I say: bring on the plumbers, including both content moderation professionals and outside researchers.

There’s a constitutional question here, too, though -- and it might be a really big one. Could transparency requirements be unconstitutional compelled speech (as the 4th Circuit recently found campaign ad transparency requirements for hosts were in Washington Post v. McManus)? Would they be like making the New York Times justify every decision to accept or reject a letter to the editor, or a wedding announcement? I haven't tried to answer that question. But we'll need to if this or other transparency legislation efforts move forward.

3. Non-Binding (I hope) Items

A. Considering future whistleblower protections:  The Government Accountability Office is directed to issue a report on the idea of protections and awards for platform employees who disclose “violations of consumer protection,” meaning improper TOS enforcement in content moderation.

I love whistleblowers. I even represented them at one point. But I shudder to think how politically loaded this particular kind of “whistleblowing” will be. The idea that a dissatisfied employee can bring ideologically grounded charges to whichever FTC Commissioners’ offices are staffed by members of his or her political party makes me want to hide in a cave. As one former platform employee told me, “bounties for selective leaking of stylized evidence against teammates is an episode of Black Mirror I’d be too scared to watch." This scenario is enough to make me wonder if those First Amendment concerns I touched on above – the ones about a government agency looking for opportunities to effectively dictate platform editorial policy in the guise of interpreting the platform’s own rules – are actually a really, really big deal. But… in any case, this whistleblower provision isn’t anything mandatory, for now.

B. A voluntary standards framework. The National Institute of Standards and Technology is directed to convene experts and issue non-binding guidelines on topics like information-sharing and use of automation.

This bears a suspicious resemblance to the “best practices” in EARN IT. Those were nominally not required in that bill’s original draft (but were in fact a prerequisite for preserving immunity). They are even more not required in EARN IT’s current draft (unless they become de facto standards for liability under the raft of state laws EARN IT unleashes). Perhaps I am naïve, but I am less worried about the voluntary standards proposed in PACT. For one thing, they won’t be crafted by nominees put in place by a who’s-who of DC heavy-hitters, as EARN IT’s would be. And the specific topics listed in the PACT Act – like developing technical standards to authenticate court orders – don’t all look like hooks for liability, like the ones in EARN IT. Most importantly, though, because PACT does not open platforms up to a flood of individual allegations under vague state laws, it leaves fewer legal blanks to be filled in by things like “voluntary” best practices or standards. Of course, the NIST standards might still come into play under PACT for courts assessing agency or AG enforcement of federal laws (or for platforms deciding whether to do what courts and AGs demand behind closed doors, to avoid going to court). So I may come to regret my optimism in calling them non-binding.

Conclusion

I can’t tell you what to think about PACT. That’s in part because I am still trying to understand some of its key provisions. (What are these federal civil laws it talks about? Will DOJ be enforcing them? What are the logistics and First Amendment ramifications of its FTC consumer protection model for TOS-based content moderation?) But it’s also in part because its core ideas are things where reasonable minds might differ. It’s not disingenuous nonsense, and it’s not a list of words that sound plausible on paper but that legal experts know are meaningless or worse. It’s a list of serious ideas, imperfectly executed. If you like any of them, you should be rooting for lawmakers to do the work to figure out how to refine them into something more operationally feasible. You should be calling on lawmakers to bring in the plumbers.   


Concluding Chapter of Social Media and Democracy: The State of the Field and Prospects for Reform (Cambridge University Press, forthcoming September 2020)

Nathaniel Persily and Joshua A. Tucker

To some extent, it has been the best of times and the worst of times when it comes to social media research. As the first half of this book reveals, we are beginning to gain important insights into the dynamics of the communication revolution underway. However, despite these achievements and the widely recognized importance of this research, unique constraints have hindered the necessary concerted academic effort to answer the most important empirical questions. The key social media datasets to answer these important questions are not as readily available as were politically relevant datasets of years past. Moreover, unique legal barriers prevent analysis of such data, and related ethical and privacy concerns have arisen that have chilled academic inquiry...

For the full chapter, download below:


Please join the Cyber Policy Center for Exploring Potential “Solutions” to Online Disinformation, hosted by the Cyber Policy Center's Kelly Born, with guests Adam Berinsky, Mitsui Professor of Political Science at MIT and Director of the MIT Political Experiments Research Lab (PERL); David Rand, Erwin H. Schell Professor and Associate Professor of Management Science and Brain and Cognitive Sciences at MIT, and Director of the Human Cooperation Laboratory and the Applied Cooperation Team; and Avi Tuschman, Founder & CIO, Pinpoint Predictive. The session is open to the public, but registration is required.

Adam Berinsky is the Mitsui Professor of Political Science at MIT and serves as the director of the MIT Political Experiments Research Lab (PERL). He is also a Faculty Affiliate at the Institute for Data, Systems, and Society (IDSS). Berinsky received his PhD from the University of Michigan in 2000. He is the author of "In Time of War: Understanding American Public Opinion from World War II to Iraq" (University of Chicago Press, 2009). He is also the author of "Silent Voices: Public Opinion and Political Participation in America" (Princeton University Press, 2004) and has published articles in many journals. He is currently the co-editor of the Chicago Studies in American Politics book series at the University of Chicago Press. He is also the recipient of multiple grants from the National Science Foundation and was a fellow at the Center for Advanced Study in the Behavioral Sciences.

David Rand is the Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences at MIT Sloan, and the Director of the Human Cooperation Laboratory and the Applied Cooperation Team. Bridging the fields of behavioral economics and psychology, David’s research combines mathematical/computational models with human behavioral experiments and online/field studies to understand human behavior. His work uses a cognitive science perspective grounded in the tension between more intuitive versus deliberative modes of decision-making, and explores topics such as cooperation/prosociality, punishment/condemnation, perceived accuracy of false or misleading news stories, political preferences, and the dynamics of social media platform behavior. 

Avi Tuschman is a Stanford StartX entrepreneur and founder of Pinpoint Predictive, where he currently serves as Chief Innovation Officer and Board Director. He’s spent the past five years developing the first Psychometric AI-powered data-enrichment platform, which ranks 260 million individuals for performance marketing and risk management applications. Tuschman is an expert on the science of heritable psychometric traits. His book and research on human political orientation have been covered in peer-reviewed and mainstream media from 25 countries. Prior to his career in tech, he advised current and former heads of state as well as multilateral development banks in the Western Hemisphere. Tuschman completed his undergraduate and doctoral degrees in evolutionary anthropology at Stanford.


The European Union is often called a ‘super-regulator’, especially when it comes to data protection and privacy rules. Having seen European lawmaking up close, in all its complexity, I have often considered this label an exaggeration. Yes, the European Union frequently takes the first steps in ensuring that principles continue to be protected even as digitization disrupts. However, the mismatch between the speed at which technology evolves and the pace of democratic lawmaking is perpetual.

Even the famous, or infamous, General Data Protection Regulation does not meet many essential regulatory needs of the moment. The mainstreaming of artificial intelligence, in particular, poses new challenges to the protection of rights and the sustaining of the rule of law. In its White Paper on Artificial Intelligence, as well as its Data Strategy, the European Commission refers to the common good, the public interest, and societal needs, rather than emphasizing only the regulation of the digital market. These are welcome steps toward acknowledging the depth and scope of technological impact and defining harms in more than economic terms. It remains to be seen how the visions articulated in the White Paper and the Strategy will translate into concrete legislation.

One proposal to make concrete improvements to legal frameworks is outlined by Martin Tisné in The Data Delusion. He highlights the need to update legal privacy standards to better reflect the harms incurred through collective data analysis, as opposed to individual privacy violations. Martin makes a clear case for addressing the discrepancy between the profit models that benefit from grouped data and the ability of any individual to prove the harms caused to his or her rights.

The lack of transparency into the inner workings of algorithmic data processing further hinders the path to much-needed accountability for the powerful technology businesses that operate growing parts of our information architecture and the data flows they process.

While the EU takes the lead in setting values-based standards and rules for the digital layer of our societies and economies, a lot of work remains to be done.

Marietje Schaake: Martin, in your paper you address the gap between the benefits for technology companies through collective data processing, and the harms for society. You point to historic reasons for individual privacy protections in European laws. Do you consider the European Union to be the best positioned to address the legal shortcomings, especially as you point out that some opportunities to do so were missed in the GDPR?

Martin Tisné: Europe is well positioned, but perhaps not for the reasons we traditionally think of (a strong privacy tradition, empowered regulators). Individual privacy alone is a necessary, but not sufficient, foundation on which to build the future of AI regulation. And whilst much is made of European regulators, the GDPR has been hobbled by the lack of funding and capacity of data protection commissioners across Europe. What Europe does have, though, is a legal, political and societal tradition of thinking about the public interest, the common good and how these are balanced against individual interests. This is where we should innovate, taking inspiration from environmental litigation such as the Urgenda climate case against the Dutch government, which established that the government had a legal duty to prevent dangerous climate change in the name of the public interest.

And Europe also has a lot to learn from other political and legal cultures. Part of the future of data regulation may come from the indigenous data rights movement, with greater emphasis on the societal and group impacts of data, or from the concept of Ubuntu ethics, which assigns community and personhood to all people.

Schaake: What scenario do you foresee in 10 years if collective harms are not dealt with in updates of laws? 

Tisné: I worry we will see two impacts. The first is a continuation of what we are seeing now: negative impacts of digital technologies on discrimination, voting rights, privacy and consumers. As people become increasingly aware of the problem, there will be a corresponding increase in legal challenges. We’re seeing this already, for example, with the Lloyd class action case against Google for collecting iPhone data. But I worry these will fail to stick and have lasting impact because of the obligation to have these cases turn on one person, or a class of people’s, individual experiences. It is very hard for individuals to seek remedy for collective harms, as opposed to personal privacy invasions. So unless we solve the issue I raise in the paper – the collective impact of AI and automation – these will continue to fuel polarization, discrimination on the basis of age, gender (and many other aspects of our lives) and the further strengthening of populist regimes.

I also worry about the ways in which algorithms will optimize on the basis of seemingly random classifications (e.g. “people who wear blue shirts, get up early on Saturday mornings, and were geo-located in a particular area of town at a particular time”). These may be proxies for protected characteristics (age, gender reassignment, disability, race, religion, sex, marriage, pregnancy/maternity, sexual orientation) and provide grounds for redress. They may also not be and sow the seeds of future discrimination and harms. Authoritarian rulers are likely to take advantage of the seeming invisibility of those data-driven harms to further silence their opponents. How can I protect myself if I don’t know the basis on which I am being discriminated against or targeted? 

Schaake: How do you reflect on the difference in speed between technological innovations and democratic lawmaking? Some people imply this will give authoritarian regimes an advantage in setting global standards and rules. What are your thoughts on ensuring democratic governments speed up? 

Tisné: Democracies cannot afford to be outpaced by technological innovation and constantly be fighting yesterday’s wars. Our laws have not changed to reflect changes in technology, which extracts value from collective data, and need to catch up.  A lot of the problems stem from the fact that in government (as in companies), the people responsible for enforcement are separated from those with the technical understanding. The solution lies in much better translation between technology, policy and the needs of the public.  

An innovation and accountability-led government must involve and empower the public in co-creating policies, above and beyond the existing rules that engage individuals (consent forms etc.). In the paper I propose a Public Interest Data Bill that addresses this need: the rules of the digital highway negotiated between the public and regulators, and between private data consumers and data generators. Specifically: clear transparency, public participation and realistic sanctions when things go wrong.

This is where democracies should hone their advantage over authoritarian regimes – using such an approach as the basis for setting global standards and best practices (e.g. affected communities providing input into algorithmic impact assessments). 

Schaake: The protection of privacy is what sets democratic societies apart from authoritarian ones. How likely is it that we will see an effort between democracies to set legal standards across borders together? Can we overcome the political tensions across the Atlantic, and strengthen democratic alliances globally?

Tisné: I remain a big supporter of international cooperation. I helped found the Open Government Partnership ten years ago, which remains the main forum for 79 countries to develop innovative open government reforms jointly with the public. Its basic principles hold true: involve global south and global north countries with equal representation, bring civil society in jointly with government from the outset, seek out and empower reformers within government (they exist, regardless of who is in power in the given year), and go local to identify exciting innovations. 

If we heed those principles we can set legal standards by learning from open data and civic technology reforms in Taiwan, experiments with data trusts in India, legislation to hold algorithms accountable in France; and by identifying and working with the individuals driving those innovations, reformers such as Audrey Tang in Taiwan, Katarzyna Szymielewicz in Poland, and Henri Verdier in France. 

These reformers need a home, a base to influence policymakers and technologists, to get those people responsible for enforcement working with those with the technical understanding. The Global Partnership on Artificial Intelligence may be that home, but these are early days; it needs to be agile enough to work with the private sector and civil society as well as governments and the international system. I remain hopeful.

 

 


Protecting Individuals Isn't Enough When the Harm Is Collective. A Q&A with Marietje Schaake and Martin Tisné on his new paper, The Data Delusion.


There will be four events, beginning on September 29th; all dates are listed below.


  • September 29th, 9-11am PST
  • October 1st, 9-11am PST
  • October 6th, 9-11am PST
  • October 9th, 9-11am PST

 

 

The Rise of Digital Authoritarianism: China, AI and Human Rights

Day 1- September 29, 2020

Welcome Remarks

Larry Diamond | Senior Fellow, Hoover Institution and FSI, Principal Investigator, Global Digital Policy Incubator

Glenn Tiffert | Research Fellow, Hoover Institution

Jenny Wang | Strategic Advisor, Human Rights Foundation

Opening Remarks

Condoleezza Rice | Director, Hoover Institution, Former U.S. Secretary of State, Denning Professor in Global Business at the Graduate School of Business

 

Panel 1: How AI Is Powering China's Domestic Surveillance State - How is AI exacerbating surveillance risks and enabling digital authoritarianism? This session will examine both state-sponsored applications and Chinese commercial services.

Panelists

Bethany Allen-Ebrahimian | China Reporter, Axios

Paul Mozur | Asia Technology Correspondent, New York Times

Glenn Tiffert | Research Fellow, Hoover Institution

Xiao Qiang | UC Berkeley & Editor-in-Chief, China Digital Times

Moderator

Melissa Chan | Foreign Affairs Reporter, Deutsche Welle Asia

 

Day 2- October 1, 2020

Panel 2: The Ethics of Doing Business with China and Chinese Companies

Eric Schmidt | Former Executive Chairman and CEO, Google & Co-Founder, Schmidt Futures
Conversant: Eileen Donahoe, Executive Director of GDPi

 

Panel 2: The Ethics of Doing Business with China and Chinese Companies - What dynamics are at play in China's effort to establish market dominance for Chinese companies, both domestically and globally? What demands are placed on non-Chinese technology companies to participate in the Chinese marketplace? What framework should U.S.-based companies use to evaluate the risks and opportunities for collaboration and market entry in China? To what extent are Chinese companies (e.g., TikTok) competing in Western markets required to comply with Chinese government instructions or demands for access to data?

Panelists

Mary Hui | Hong Kong-based Technology and Business Reporter, Quartz
 
Megha Rajagopalan | International Correspondent and Former China Bureau Chief, Buzzfeed News
 

Alex Stamos | Director, Stanford Internet Observatory & Former Chief Security Officer, Facebook

Moderator

Casey Newton | Silicon Valley Editor, The Verge

 

Day 3- October 6, 2020

Panel 3: China as an Emerging Global AI Superpower

Keynote & Conversation

Competing in the Superpower Marathon with China

Mike Brown | Director, Defense Innovation Unit

Conversant: Larry Diamond, Senior Fellow, Hoover Institution and FSI, Principal Investigator, Global Digital Policy Incubator

Panel 3: China as an Emerging Global AI Superpower - How should we think about China's growing influence in the realm of AI and the attendant geopolitical risks and implications? This session will explore China’s bid through Huawei to build and control the world's 5G networks, and what that implies for human rights and national sovereignty and security; China's export of surveillance technology to authoritarian regimes around the world; China's global partnerships to research and develop AI; and the problem of illicit technology transfer/theft.

Panelists

Steven Feldstein | Senior Fellow, Carnegie Endowment for International Peace 

Lindsay Gorman | Fellow for Emerging Technologies, Alliance for Securing Democracy, German Marshall Fund 

Maya Wang | China Senior Researcher, Human Rights Watch

Moderator

Dominic Ziegler | Senior Asia Correspondent and Banyan Columnist, The Economist

 

Day 4- October 9, 2020

Panel 4: How Democracies Should Respond to China’s Emergence as an AI Superpower

Keynote

Digital Social Innovation: Taiwan Can Help

Audrey Tang | Digital Minister, Taiwan

Panel 4: How Democracies Should Respond to China's Emergence as an AI Superpower - How should the rest of the world, and especially the world's democracies, react to China's bid to harness AI for ill as well as good? How do we strike the right balance between vigilance in defense of human rights and national security and xenophobic overreaction?

Panelists

Christopher Balding | Associate Professor, Fulbright University Vietnam

Anja Manuel | Co-Founder, Rice, Hadley, Gates & Manuel

Chris Meserole | Deputy Director of the Artificial Intelligence and Emerging Technology Initiative, Brookings Institution

Moderator

Larry Diamond | Senior Fellow, Hoover Institution and FSI, Principal Investigator, Global Digital Policy Incubator

 

Closing Keynote & Conversation

Strengthening Human-Centered Artificial Intelligence

Fei-Fei Li | Co-Director, Stanford Institute for Human-Centered Artificial Intelligence (HAI)

Conversant: Eileen Donahoe, Executive Director of GDPi

Closing Remarks: Alex Gladstein & Eileen Donahoe
