-

Image: Election debrief event at Stanford

The US 2020 elections have been fraught with challenges, including the rise of “fake news” and threats of foreign interference that emerged after 2016, ongoing concerns about racially targeted disinformation, and new threats related to the COVID-19 pandemic. Digital technologies have played a larger role in the 2020 elections than in any previous election.

On November 4th at 10 a.m. PST, join the team at the Stanford Cyber Policy Center, in collaboration with the Freeman Spogli Institute, Stanford’s Institute for Human-Centered Artificial Intelligence, and the Stanford Center on Philanthropy and Civil Society, for a day-after discussion of the role of digital technologies in the 2020 elections. Speakers will include Nathaniel Persily, faculty co-director of the Cyber Policy Center and Director of the Program on Democracy and the Internet; Marietje Schaake, the Center’s International Policy Director and International Policy Fellow at Stanford’s Institute for Human-Centered Artificial Intelligence; Alex Stamos, Director of the Cyber Policy Center’s Internet Observatory and former Chief Security Officer at Facebook and Yahoo; Renée DiResta, Research Manager at the Internet Observatory; Andrew Grotto, Director of the Center’s Program on Geopolitics, Technology, and Governance; and Rob Reich, Faculty Director of the Center for Ethics in Society, in conversation with Kelly Born, the Center’s Executive Director.

Please note that we will also have a YouTube livestream available for potential overflow or for anyone having issues connecting via Zoom: https://youtu.be/H2k62-JCAgE

 


Renée DiResta is the former Research Manager at the Stanford Internet Observatory. She investigates the spread of malign narratives across social networks, and assists policymakers in understanding and responding to the problem. She has advised Congress, the State Department, and other academic, civic, and business organizations, and has studied disinformation and computational propaganda in the context of pseudoscience conspiracies, terrorism, and state-sponsored information warfare.

You can see a full list of Renée's writing and speeches on her website, www.reneediresta.com, or follow her @noupside.

 

Former Research Manager, Stanford Internet Observatory
Rob Reich

Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High-Level Advisory Body on AI. From 2009 to 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and technology policy. She is the author of The Tech Coup.


 

Non-Resident Fellow, Cyber Policy Center
Fellow, Institute for Human-Centered Artificial Intelligence
-

Image: October 28 event, Reset: Reclaiming the Internet for Civil Society

Image: Reset: Reclaiming the Internet for Civil Society book cover
Digital technologies are linked to a growing number of social and political maladies, including political repression, disinformation, and polarization. Accountability for these technologies is weak, allowing authoritarian rulers and bad actors to exploit the information landscape for their gain. A largely unregulated surveillance industry, innovations in technologies of remote control, dark PR firms, and “hack-for-hire” services feeding off rivers of poorly secured personal data also muddy the waters. This set of serious, democracy-unfriendly challenges calls for a deeper reexamination of our communications ecosystem. 

On October 28th at 10 a.m. Pacific Time, join the team at the Stanford Cyber Policy Center for a discussion with author Ronald J. Deibert; Eileen Donahoe, Executive Director of the Global Digital Policy Incubator at Stanford University’s Cyber Policy Center; and Larry Diamond, co-lead of the Global Digital Policy Incubator, in conversation with Kelly Born, the Center’s Executive Director.

 

 

Mosbacher Senior Fellow in Global Democracy at the Freeman Spogli Institute for International Studies
William L. Clayton Senior Fellow at the Hoover Institution
Professor, by courtesy, of Political Science and Sociology

Larry Diamond is the William L. Clayton Senior Fellow at the Hoover Institution, the Mosbacher Senior Fellow in Global Democracy at the Freeman Spogli Institute for International Studies (FSI), and a Bass University Fellow in Undergraduate Education at Stanford University. He is also professor by courtesy of Political Science and Sociology at Stanford, where he lectures and teaches courses on democracy (including an online course on EdX). At the Hoover Institution, he co-leads the Project on Taiwan in the Indo-Pacific Region and participates in the Project on the U.S., China, and the World. At FSI, he is among the core faculty of the Center on Democracy, Development and the Rule of Law, which he directed for six and a half years. He leads FSI’s Israel Studies Program and is a member of the Program on Arab Reform and Development. He also co-leads the Global Digital Policy Incubator, based at FSI’s Cyber Policy Center. He served for 32 years as founding co-editor of the Journal of Democracy.

Diamond’s research focuses on global trends affecting freedom and democracy and on U.S. and international policies to defend and advance democracy. His book, Ill Winds: Saving Democracy from Russian Rage, Chinese Ambition, and American Complacency, analyzes the challenges confronting liberal democracy in the United States and around the world at this potential “hinge in history,” and offers an agenda for strengthening and defending democracy at home and abroad. A paperback edition with a new preface was released by Penguin in April 2020. His other books include: In Search of Democracy (2016), The Spirit of Democracy (2008), Developing Democracy: Toward Consolidation (1999), Promoting Democracy in the 1990s (1995), and Class, Ethnicity, and Democracy in Nigeria (1989). He has edited or coedited more than fifty books, including China’s Influence and American Interests (2019, with Orville Schell), Silicon Triangle: The United States, China, Taiwan, and Global Semiconductor Security (2023, with James O. Ellis Jr. and Orville Schell), and The Troubling State of India’s Democracy (2024, with Sumit Ganguly and Dinsha Mistree).

During 2002–03, Diamond served as a consultant to the US Agency for International Development (USAID) and was a contributing author of its report, Foreign Aid in the National Interest. He has advised and lectured to universities and think tanks around the world, and to the World Bank, the United Nations, the State Department, and other organizations dealing with governance and development. During the first three months of 2004, Diamond served as a senior adviser on governance to the Coalition Provisional Authority in Baghdad. His 2005 book, Squandered Victory: The American Occupation and the Bungled Effort to Bring Democracy to Iraq, was one of the first books to critically analyze America's postwar engagement in Iraq.

Among Diamond’s other edited books are Democracy in Decline?; Democratization and Authoritarianism in the Arab World; Will China Democratize?; and Liberation Technology: Social Media and the Struggle for Democracy, all edited with Marc F. Plattner; and Politics and Culture in Contemporary Iran, with Abbas Milani. With Juan J. Linz and Seymour Martin Lipset, he edited the series Democracy in Developing Countries, which helped to shape a new generation of comparative study of democratic development.


Former Director of the Center on Democracy, Development and the Rule of Law
Ron Deibert
-

Image: Digital Trade Wars

Please join the Cyber Policy Center on Wednesday, October 21, from 10 a.m. to 11 a.m. Pacific Time, with host Marietje Schaake, International Policy Director of the Cyber Policy Center, in conversation with Dmitry Grozoubinski, founder of ExplainTrade.com and visiting professor at the University of Strathclyde, and Anu Bradford, Henry L. Moses Professor of Law and International Organizations at Columbia Law School and author of The Brussels Effect: How the European Union Rules the World, for a discussion and exploration of the digital trade war.

This event is free and open to the public, but registration is required.

 


Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High-Level Advisory Body on AI. From 2009 to 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and technology policy. She is the author of The Tech Coup.


 

Non-Resident Fellow, Cyber Policy Center
Fellow, Institute for Human-Centered Artificial Intelligence
Marietje Schaake
Anu Bradford
Dmitry Grozoubinski
Panel Discussions
-

Image: Man with iPad

Social media sites have now surpassed cable, network, and local TV as the primary source of political news for one in five Americans. Yet the speed and volume of online information, the challenge of discerning the credibility of online sources, and concerns about viral online disinformation place a significant burden on users. What new user-facing digital literacy initiatives are underway, what does the research say about the impact and effectiveness of media literacy interventions, and what are the implications for both corporate and government policy?

On Wednesday, October 14th, from 10 a.m. to 11 a.m. Pacific Time, please join Kelly Born, Executive Director of the Stanford Cyber Policy Center, in conversation with Jennifer Kavanaugh of RAND’s Countering Truth Decay initiative; Kristin Lord, President and CEO of IREX; and Claire Wardle, co-founder and director of First Draft, for a discussion on the state of media literacy.

Kristin Lord
Jennifer Kavanaugh
Claire Wardle
Seminars
Authors: Jody Berger, Daphne Keller
News Type: Q&As

Daphne Keller leads the newly launched Program on Platform Regulation, a program designed to offer lawmakers, academics, and civil society groups groundbreaking analysis and research to support wise governance of Internet platforms.

Q: Facebook, YouTube and Twitter rely on algorithms and artificial intelligence to provide services for their users. Could AI also help in protecting free speech and policing hate speech and disinformation?   

DK: Platforms increasingly rely on artificial intelligence and other algorithmic means to automate the process of assessing – and sometimes deleting – online speech. But tools like AI can’t really “understand” what we are saying, and automated tools for content moderation make mistakes all the time. We should worry about platforms’ reliance on automation, and worry even more about legal proposals that would make such automated filters mandatory. Constitutional and human rights law give us a legal framework to push back on such proposals, and to craft smarter rules about the use of AI. I wrote about these issues in this New York Times op-ed and in some very wonky academic analysis in the Journal of European and International IP Law.

Q: Can you explain the potential impacts on citizens’ rights when the platforms have global reach but governments do not?

DK: On one hand, people worry about platforms displacing the legitimate power of democratic governments. On the other hand, platforms can actually expand state power in troubling ways. One way they do that is by enforcing a particular country’s speech rules everywhere else in the world. Historically that meant a net export of U.S. speech law and values, as American companies applied those rules to their global platforms. More recently, we’ve seen that trend reversed, with things like European and Indian courts requiring Facebook to take user posts down globally – even if the users’ speech would be legally protected in other countries. Governments can also use soft power, or economic leverage based on their control of access to lucrative markets, to convince platforms to “voluntarily” globally enforce that country’s preferred speech rules. That’s particularly troubling, since the state influence may be invisible to any given user whose rights are affected.

There is such a pressing need for thoughtful work on the laws that govern Internet platforms right now, and this is the place to do it... We have access to the people who are making these decisions and who have the greatest expertise in the operational realities of the tech platforms.
Daphne Keller
Director of the Program on Platform Regulation, Cyber Policy Center; Lecturer, Stanford Law School

Q: Are there other ways that platforms can expand state power? 

DK: Yes, platforms can let states bypass democratic accountability and constitutional limits by using private platforms as proxies for their own agenda. States that want to engage in surveillance or censorship are constrained by the U.S. Constitution, and by human rights laws around the world. But platforms aren’t. If you’re a state and you want to do something that would violate the law if you did it yourself, it’s awfully tempting to coerce or persuade a platform to do it for you. This issue of platforms being proxies for other actors isn’t limited to governments – anyone with leverage over a platform, including business partners, can potentially play a hidden role like this.

I wrote about this complicated nexus of state and private power in Who Do You Sue? for the Hoover Institution.    

Q: What inspired you to create the Program on Platform Regulation at the Cyber Policy Center right now?

DK: There is such a pressing need for thoughtful work on the laws that govern Internet platforms right now, and this is the place to do it. At the Cyber Policy Center, there’s an amazing group of experts, like Marietje Schaake, Eileen Donahoe, Alex Stamos and Nate Persily, who are working on overlapping issues. We can address different aspects of the same issues and build on each other’s work to do much more together than we could individually.

The program really benefits from being at Stanford and in Silicon Valley because we have access to the people who are making these decisions and who have the greatest expertise in the operational realities of the tech platforms. 

The Cyber Policy Center is part of the Freeman Spogli Institute for International Studies at Stanford University.

Subtitle: Keller explains some of the issues currently surrounding platform regulation


This essay closely examines the effect on free-expression rights when platforms such as Facebook or YouTube silence their users’ speech. The first part describes the often messy blend of government and private power behind many content removals, and discusses how the combination undermines users’ rights to challenge state action. The second part explores the legal minefield for users—or potentially, legislators—claiming a right to speak on major platforms. The essay contends that questions of state and private power are deeply intertwined. To understand and protect internet users’ rights, we must understand and engage with both.

 

Publication Type: Commentary
-

Image: Social Media and Democracy book symposium

Please join the Cyber Policy Center for a discussion of Social Media and Democracy: The State of the Field and Prospects for Reform, a new book with chapters by scholars and faculty at the Cyber Policy Center. The book explores the emerging multidisciplinary field of social media and democracy by synthesizing what we know, identifying what we do not know and the obstacles to future research, and charting a course for future inquiry. Chapters by leading scholars cover major topics – from disinformation to hate speech to political advertising – and situate recent developments in the context of key policy questions. In addition, the book canvasses existing reform proposals to address widely perceived threats that social media poses to democracy.

Please note that we will also have a YouTube livestream available for potential overflow or for anyone having issues connecting via Zoom: https://youtu.be/KXtMB-3DlHc


 

AGENDA (subject to change, with Q&A integrated throughout)

  • 9 a.m.: Introduction with Nathaniel Persily, James B. McClatchy Professor of Law at Stanford Law School and Faculty Co-Director of the Stanford Cyber Policy Center, and Joshua A. Tucker, Professor of Politics, affiliated Professor of Russian and Slavic Studies, and affiliated Professor of Data Science at New York University
  • 9:15 a.m.-10:30 a.m.
    • Misinformation, Disinformation, and Online Propaganda with Andrew M. Guess, Assistant Professor of Politics and Public Affairs at Princeton University.
    • Online Hate Speech with Alexandra A. Siegel, Assistant Professor of Political Science at the University of Colorado Boulder
    • Bots and Computational Propaganda: Automation for Communication and Control with Samuel C. Woolley, Assistant Professor at the School of Journalism at the University of Texas at Austin
    • Online Political Advertising in the United States with Travis N. Ridout, Thomas S. Foley Distinguished Professor of Government and Public Policy in the School of Politics, Philosophy and Public Affairs at Washington State University and Co-Director of the Wesleyan Media Project
  • 10:30 a.m.: 10 min break
  • 10:40 a.m. - 11:40 a.m.:
    • Democratic Creative Destruction? The Effect of a Changing Media Landscape on Democracy with Rasmus Kleis Nielsen, Director of the Reuters Institute for the Study of Journalism and Professor of Political Communication at the University of Oxford
    • Misinformation and Its Correction with Adam J. Berinsky, Mitsui Professor of Political Science at Massachusetts Institute of Technology (MIT) and Director of the MIT Political Experiments Research Lab

    • Comparative Media Regulation in the United States and Europe with Francis Fukuyama, Olivier Nomellini Senior Fellow at the Freeman Spogli Institute for International Studies and the Mosbacher Director of the Center on Democracy, Development, and the Rule of Law at Stanford University and Andrew Grotto, William J. Perry International Security Fellow at the Center for International Security and Cooperation, Research Fellow at the Hoover Institution, and Director of the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center

  • 11:40 a.m.: 10 min break
  • 11:50 a.m. - 12:30 p.m.:
    • Facts and Where to Find Them: Empirical Research on Internet Platforms and Content Moderation with Daphne Keller, Director of the Program on Platform Regulation at the Stanford Cyber Policy Center
    • Democratic Transparency in the Platform Society, with Robert Gorwa, doctoral student in the Department of Politics and International Relations at the University of Oxford
  • 12:30 p.m.: Closing and final Q&A with Nathaniel Persily and Joshua A. Tucker

 

Adam J. Berinsky


Olivier Nomellini Senior Fellow at the Freeman Spogli Institute for International Studies
Director of the Ford Dorsey Master's in International Policy
Research Affiliate at The Europe Center
Professor by Courtesy, Department of Political Science

Francis Fukuyama is Olivier Nomellini Senior Fellow at Stanford University's Freeman Spogli Institute for International Studies (FSI), and a faculty member of FSI's Center on Democracy, Development, and the Rule of Law (CDDRL). He is also Director of Stanford's Ford Dorsey Master’s in International Policy Program, and a professor (by courtesy) of Political Science.

Dr. Fukuyama has written widely on issues in development and international politics. His 1992 book, The End of History and the Last Man, has appeared in over twenty foreign editions. His most recent book,  Liberalism and Its Discontents, was published in the spring of 2022.

Francis Fukuyama received his B.A. from Cornell University in classics, and his Ph.D. from Harvard in Political Science. He was a member of the Political Science Department of the RAND Corporation and of the Policy Planning Staff of the US Department of State. From 1996-2000 he was Omer L. and Nancy Hirst Professor of Public Policy at the School of Public Policy at George Mason University, and from 2001-2010 he was Bernard L. Schwartz Professor of International Political Economy at the Paul H. Nitze School of Advanced International Studies, Johns Hopkins University. He served as a member of the President’s Council on Bioethics from 2001-2004.  

Dr. Fukuyama holds honorary doctorates from Connecticut College, Doane College, Doshisha University (Japan), Kansai University (Japan), Aarhus University (Denmark), and the Pardee Rand Graduate School. He is a non-resident fellow at the Carnegie Endowment for International Peace. He is a member of the Board of Trustees of the Rand Corporation, the Board of Trustees of Freedom House, and the Board of the Volcker Alliance. He is a fellow of the National Academy for Public Administration, a member of the American Political Science Association, and of the Council on Foreign Relations. He is married to Laura Holmgren and has three children.

(October 2024)

Francis Fukuyama
Robert Gorwa
Andrew Guess
Andrew Grotto

Daphne Keller's popular press writing focuses on platform regulation and Internet users' rights in the U.S., the EU, and around the world. Her recent work has focused on platform transparency, data collection for artificial intelligence, interoperability models, and “must-carry” obligations. She has testified before legislatures, courts, and regulatory bodies around the world on topics ranging from the practical realities of content moderation to copyright and data protection. She was previously Associate General Counsel for Google, where she had responsibility for the company’s web search products. She is a graduate of Yale Law School, Brown University, and Head Start.

FILINGS

  • U.S. Supreme Court amicus brief on behalf of Francis Fukuyama, NetChoice v. Moody (2024)
  • U.S. Supreme Court amicus brief with ACLU, Gonzalez v. Google (2023)
  • Comment to European Commission on data access under EU Digital Services Act
  • U.S. Senate testimony on platform transparency

 


Director of Program on Platform Regulation, Cyber Policy Center
Lecturer, Stanford Law School
Daphne Keller
Rasmus Kleis Nielsen
Nathaniel Persily
Travis Ridout
Alexandra A. Siegel
Joshua A. Tucker
Samuel C. Woolley
Author: Daphne Keller
News Type: Blogs
Image: Daphne Keller blog header

 

Alex Feerst, one of the great thinkers about Internet content moderation, has a revealing metaphor about the real-world work involved. “You might go into it thinking that online information flows are best managed by someone with the equivalent of a PhD in hydrology,” he says. “But you quickly discover that what you really need are plumbers.” The daily work of enforcing Terms of Service, or honoring legal takedown demands under laws like the Digital Millennium Copyright Act (DMCA), is all about the plumbing. If you don’t identify rules and operational logistics that function at scale, then you won’t accomplish what you set out to do. If you’re trying to enforce Terms of Service, you’ll get erratic and unfair outcomes. If you’re trying to enforce laws by making platforms liable for users’ unlawful posts, you’ll incentivize removal of lawful speech, encouraging platforms to appease anyone who merely claims online speech is illegal.

Until recently, despite the seemingly daily drumbeat of legislative proposals in this area, none of the lawmakers seemed to have talked to any plumbers. Bills like FOSTA and EARN IT proclaimed important goals, but did not lay out a system to actually achieve them. The original version of the EARN IT Act in particular failed in this regard. Its goal was incredibly important: to combat the scourge of child sexual exploitation online. But its mechanism for achieving that goal was basically a punt. EARN IT told platforms’ operational teams that they would be subjected to some rules eventually – but left those rules to be determined by an unaccountable body at some later date, after the law was already in place.  The more recent EARN IT draft, which passed out of committee on July 2, is, in a way, worse. It gives up on the idea of setting clear rules for platform content moderation operations at all. Instead, it exposes platforms to an unknown array of state laws under vague standards, to be interpreted by courts at some future date — leaving companies to guess how they need to redesign their services to avoid huge civil fines or criminal prosecution.

Against this backdrop, the “Platform Accountability and Consumer Transparency Act” (the PACT Act), sponsored by Sens. Brian Schatz (D-HI) and John Thune (R-SD), is a huge step forward. That’s not to say I love it (or endorse it). There are parts I strongly disagree with. Other parts build out ideas that might be workable, but it depends on details that are not resolved yet in the draft, or that just plain need fixing. Still: This is an intellectually serious effort to grapple with the operational challenges of content moderation at the enormous scale of the Internet. That in itself makes it remarkable in today’s DC environment. We should welcome PACT as a vehicle for serious, rational debate on these difficult issues.  

Focusing on operational logistics, as PACT does, is important. But of course, to achieve its legitimate ends, the law still has to get those logistics right. In that sense, PACT has a long way to go. The process-based rules that it sets forth need a lot more tire-kicking. They should get at least the level of careful review and negotiation that the DMCA did back in 1998: a lot of meetings, among a lot of stakeholders, putting in enough time to truly hash out questions of operational detail for those harmed by online content, users accused of breaking the law, and platforms. We seem less able to have careful discussion now than we were back then. But we should be much better equipped to do it. Today we have not only a broad cadre of academic and civil society experts in intermediary liability -- many of whom signed onto these Seven Principles on point -- but also an entire field of professionals who work on content moderation or on “trust and safety” more broadly. We have the plumbers! They know how things work! If PACT’s sponsors are serious about proposing the best possible law, they should bring them in to fix this thing.  

Big picture, PACT has (1) a set of rules for platform liability for unlawful content, (2) a set of consumer protection-based rules that mostly affect platforms’ voluntary moderation under Terms of Service, and (3) a few non-binding items.

1. Rules Changing CDA 230 Immunity and Exposing Platforms to Liability for Unlawful User Content

A. The Court Order Standard: Courts should decide what’s illegal, and once they do, platforms should honor those decisions.

If you want to walk back protections of CDA 230, taking away immunity for content that platforms know was deemed illegal by a court after a fair process is the low-hanging fruit. But this “court order standard” is no panacea. On the one hand, it is subject to abuse when frivolous claimants get default judgments, or just falsify court orders. On the other, some would argue that it creates too high a bar if it lets platforms leave up highly damaging material that we think they should be able to recognize as illegal without waiting for a court to act. (Though the worst know-it-when-you-see-it illegal content is proscribed by federal criminal law, for which Section 230 does not immunize platforms anyway.) For all its flaws, though, this standard would eliminate immunity in some of the most egregious cases currently covered by 230. It is the standard that many international human rights experts, civil society groups, legislatures, and courts have arrived at after wrangling for years with the problems created by fuzzier standards.

The PACT court order standard does have some problems. First, the way the bill is drafted makes it far too easy for subsequent amendments to eliminate the court order requirement entirely. A few wording changes in the bill’s Definitions section would leave the overall law with a very problematic notice and takedown model for anything that an accuser merely alleges is illegal. (More on that in the next section.) Second, the list of harms that can be addressed by court orders is weird. PACT allows court-order based takedowns under any federal criminal or civil law, or under state defamation law. That leaves out a lot of real harms that are primarily addressed by state law, like non-consensual sexual images (“revenge porn”). Meanwhile, no one I’ve talked to is even sure what the universe of federal civil claims looks like. It seems unlikely to correspond to the online content people are most concerned about. This focus on federal civil law crops up throughout PACT, and to be honest I don’t understand why. (If platforms were interpreting the law, under notice and takedown, then the problems with exposing them to all those divergent state laws would be obvious. But courts do the interpreting under the court order standard, so diversity of state laws is not the issue.) Having a list of the specific claims PACT would authorize would really help anyone who wants to understand the law’s likely impact.

B. The Procedural Notice and Takedown Model: The law should specify a process for accusers to notify platforms about allegedly (or in PACT’s case, court-adjudicated) illegal content, for platforms to respond, and for accused speakers to defend themselves.

This is intermediary liability 101.  We already have a decent (if flawed) model for legally choreographed notice and takedown in the DMCA. Frankly, it’s embarrassing that other US legislative proposals so far have not bothered to include things like “counter-notice” opportunities for speakers targeted by takedown notices, or penalties for bad-faith accusers (who are legion). To be clear, even with procedural protections like these, notice and takedown based on private individuals’ allegations, rather than court adjudication, would be seriously problematic for many kinds of speech. Imagine if anyone alleging defamation could make platforms silence a #metoo post or remove a link to news coverage criticizing a politician, for example. 

Hecklers will always send bogus or abusive notices, platforms will always have incentives to comply, and offering appeals to victimized speakers won’t be enough to offset the problem. But any kind of process is worlds better than amorphous standards like liability for “recklessness” or for content that platforms “should” have known about. No one knows what those standards mean in practice, and no one but the biggest incumbent platforms will want to assume the risk and expense of litigating to find out. This kind of legal uncertainty hurts smaller competitors more than bigger ones, in addition to threatening the speech rights of ordinary Internet users.  

PACT’s notice and takedown process isn’t perfect. Perhaps most troubling is the requirement to take down content within 24 hours of notification -- a standard much like the one recently deemed to unconstitutionally infringe users’ expression rights under French legal standards. The PACT Act has some carve-outs to this obligation, but the bottom line is a powerful new pressure to comply, even in cases of serious uncertainty about important speech or information. That pressure is, in my opinion, too blunt an instrument. There are more granular problems in PACT’s takedown process rules, too. For example, I initially read PACT to require these notices (like DMCA notices) to spell out the exact URL or location of the illegal content. That’s pretty standard in notice and takedown systems. But on review I realized the notice can be much clumsier and broader, merely identifying the accused’s account.   

C. Empowering Federal Agency Enforcers: Platforms are not immune from federal agency enforcement of federal laws or regulations.

I believe my summary captures the PACT drafters’ intent, but to be honest, I am not 100% sure what the words mean (and I’ve heard the same confusion from others). The bill says that platforms lose immunity against enforcement “by the Federal Government” of any “Federal criminal or civil statute, or any regulations of an Executive agency (as defined in section 105 of title 5, United States Code) or an establishment in the legislative or judicial branch of the Federal Government.” I think that means that HUD, EPA, FDA, CPSC, and others can bring enforcement actions. But maybe it also opens up exposure to civil claims broadly from DOJ? Here again, we could all understand this better if we had a list of the kinds of claims at stake. My suspicion is that there are not all that many areas where (a) agencies have relevant enforcement power, and also (b) 230 would even matter as a defense. (I don’t think 230 is necessarily a defense to important HUD housing discrimination claims, for starters.) But without a list of claims, it’s hard to say.

However odd this is in practice, I think I understand the theoretical justification. Making platforms take content down based on any accuser’s legal claim has obvious problems, but waiting for a court to decide is slow and can limit access to justice for victims of real legal violations. So it’s natural to look for a compromise approach, and empowering trusted government agencies is an obvious one. (This has been done or proposed in a lot of countries and is quite controversial in some – basically where the agencies are least trusted by civil society.) If agencies don’t have the authority to require content removal for First Amendment or Administrative Procedure Act reasons, but they do have power to bring a court case, this essentially puts the “heckler’s veto” power to threaten litigation in the hands of government lawyers instead of private individuals. In principle, that’s a reasonable place to put it, and we should be able to expect them to use their power wisely. In practice, the Supreme Court has repeatedly had to stop government lawyers from telling publishers and distributors what they can say, or enable others to say. That’s precisely what the First Amendment is about. This is also a strange time in American history to consider handing this power to federal agencies, given very serious concerns about their politicization, and the President’s recent efforts to use federal authority to shape platforms’ editorial policies for user speech. 

D. Empowering State Attorney General Enforcers: State attorneys general can bring federal civil claims against platforms (if their states have analogous laws, and the state AG consults with the federal one). 

This opens up claims under any federal civil law, not just the ones agencies enforce. (Maybe this implicitly means we should interpret the agency provision above as an authorization for AG Barr to bring any federal civil claim against platforms, too?) This again seems weird, because it’s not clear why federal civil law is the relevant body of law, or why empowering State AGs to enforce it solves our most pressing problems. That’s especially the case if they will enforce easily politicized or subjective rules, like FTC Act standards for “fairness” of TOS enforcement. 

State AGs have a tendency to push for enforcement of disparate state approaches, which raises obvious problems in governing the Internet. Some also have a pretty serious history of financially or politically motivated shenanigans, including taking sides in ongoing power struggles between corporate titans in the content, tech, and telecoms industries. One state AG, for example, literally sent Google a threatening letter extensively redlined by MPAA lawyers. Wired reports similar concerns about News Corp’s role in cultivating state AG attention to Facebook in 2007. Opening up more state forum shopping for these fights under PACT, and potentially subjecting platforms to conflicting back-room political pressures from red and blue state AGs, makes me pretty uneasy.

E. Ensuring That Platforms Are Not Required to Actively Police User Speech: PACT appears to preserve this essential protection for user rights, but needs clarification.

Requiring platforms to proactively monitor their users’ communications is the third rail of intermediary liability law. In Europe, it has been the center of the biggest fights in recent years. In the U.S., making platforms actively review everything users say in search of legal violations could raise major issues under both the 1st and 4th Amendments. So far, U.S. law has steered far clear of this -- federal statutes on both copyright and child sexual abuse material, for example, expressly disclaim any monitoring requirements for intermediaries.

PACT appears to hew to this important principle with a standard immunity-is-not-conditioned-on-monitoring provision. But… it’s not entirely clear if this passage actually does the job. That’s especially the case given some troublingly fuzzy language about requiring platforms to not just take down content but also “stop… illegal activity” by users. It’s not clear what that language means, short of dispatching platforms to police users’ posts or carry out prior restraints on their speech. Getting this language buttoned up tighter will be critical if the bill moves forward. (This one is really an issue for the legal/Constitutional nerds, not the content moderation operations specialists.)

F. Regulating Consumer-Facing Edge Platforms, Not Internet Infrastructure: PACT has some limits, but needs more.  

Whatever we think the right legal obligations are for the Facebooks and YouTubes of the world, those probably are not the right obligations for Internet infrastructure providers. Companies that offer Internet access, routing, domain name resolution, content delivery networks, payment processing, and other technical or functional processing in the deeper layers of the Internet simply don’t work the same way. For one thing, they are blunt instruments. Many of them have no ability to take down just the image, post, or page that violates a law -- they can only shut off an entire website, service, or app.  

PACT takes a step toward carving these providers out of its scope, but it doesn’t go far enough. (It only carves them out to the extent that they are providing service to another 230-immunized entity.) This shouldn’t be hard to fix.

2. Rules Regarding Platforms’ Voluntary Measures to Enforce Terms of Service 

A. The Consumer Protection Model: If platforms are going to enforce private speech rules under their Terms of Service, they should state the rules clearly and enforce them consistently. Failure to do so is a consumer protection harm, like a bait-and-switch or a failure to label food correctly.

[Update July 17: Staff involved in the bill tell me that the intent of this section is for the FTC to enforce failures of process in TOS enforcement (like failure to offer appeals, publish transparency reports, etc.), not for the FTC to determine which substantive outcomes are correct under the platform's TOS. That's a meaningful difference and would mitigate some of the First Amendment concerns I mention below. Clarification in the text about this (especially re the FTC assessing "appropriate" steps by the platform) would help, since most people I consulted with about the bill and this post seemed to read (or mis-read) the provision the same way I did.]

Consumer protection provides a useful framing, and one that critics on the left and right can often agree on. European regulators used consumer protection law to reach agreements about TOS enforcement with platforms in 2018, and ideas along these lines keep coming up inside and outside the United States. At a (very) high level, I like the idea that platforms should have to make clear commitments to users, and uphold them.  But there are very real operational and constitutional issues to be resolved. 

As a practical matter, I have a lot of questions. Exactly how much detail can we reasonably demand from platforms explaining their rules – is it enough if they provide the same level of detail as state legal codes, for example? Are they supposed to notify users every time some new form of Internet misbehavior crops up and prompts them to update the rules (which happens all the time)? How far do we want to go in having courts or regulators second-guess platforms in hard judgment calls? Speaking of those courts or regulators, how would we possibly staff them for the inevitable deluge of disputes? (I think PACT’s answer is that the FTC brings the occasional enforcement action but isn’t required to handle any particular complaint.)

Then there are the constitutional issues. If a regulator rejects the platform’s interpretation of its own rules in a hard case, is it essentially overriding platform editorial policy, and does that violate platforms’ First Amendment rights? Is the government essentially picking winners and losers among lawful user posts, and does that violate the users’ First Amendment rights? Even without activist court or agency interpretations, is it a problem generally to use consumer protection law to restrict what are essentially editorial choices? (This really isn’t the same as food labeling, for example. As the Supreme Court explained in rejecting a similar analogy years ago, “[t]here is no specific constitutional inhibition against making the distributors of food the strictest censors of their merchandise, but the constitutional guarantees of the freedom of speech and of the press stand in the way of imposing a similar requirement on the bookseller.”) 

For all my sympathy for this approach, many of the things that trouble me most about PACT are in these sections. This is one place where the bill could most benefit from careful review by the plumbers I mentioned. I am not an operations specialist by Silicon Valley standards, and I did not try to vet every aspect of this proposal. But even I can see a lot of issues.  The bill requires many platforms to offer call centers, for example, not just the web forms and online communications typically used today. A noted trust and safety expert, Dave Willner, said in a panel I attended that this would lead to worse and slower outcomes for most users trying to solve real problems. He concluded that “you’d be better off taking the cash this would cost and burning it for heat.” (That’s a paraphrase. In spirit, it is the same thing I’ve heard from a lot of people.)  PACT also requires 14-day turnaround time for responding to notifications, which sounds good in theory but in practice may be truly difficult for small platforms facing hard judgment calls or sudden increases in traffic, notifications, or abuse. Even for larger platforms, a standard like this could force them to prioritize recent notices at the expense of ones that may be more serious (identifying more harmful content) or accurate (coming from a source with a good track record).  

B. Transparency Reports: Making platforms (or platforms over a certain size) publish aggregate data about content moderation, in a format that permits meaningful comparison between platforms.

I am a huge fan of transparency. Without it, we will stay where we are now: people on all sides of the platform content moderation issues will keep slinging anecdotes like mud. The ones with more power and media access will get more attention to their anecdotes. That’s a terrible basis for lawmaking. If lawmakers have the power to get the facts first and legislate second, we will all be better off.

That said, tracking more detail about the grounds for every content demotion or removal; what kind of entity sent the notice; what rule it violated for what reason; the role of automation; whether there was an appeal; etc. all add up to a fair amount of work – especially for smaller companies. They can’t track everything. The challenge is orders of magnitude bigger for transparency about what PACT calls “deprioritization” of content. Google, for example, adjusts its algorithm some 500-600 times per year, affecting literally trillions of possible search outcomes. It’s not clear what meaningful and useful transparency about that even looks like. Those of us who advocate for transparency should be smart about what precise information we ask for, so we get the optimal bang for our societal buck (and so we don’t fail to ask for something that will later turn out to matter a lot). Right now, although people like me put out lists of possible asks, and researchers who have been trying and too often failing to get important information from platforms often have very specific critiques of current transparency measures, we don't really have an informed consensus on what the priorities should be. So here, too, I say: bring on the plumbers, including both content moderation professionals and outside researchers.   

There’s a constitutional question here, too, though -- and it might be a really big one. Could transparency requirements be unconstitutional compelled speech (as the 4th Circuit recently found campaign ad transparency requirements for hosts were in Washington Post v. McManus)? Would they be like making the New York Times justify every decision to accept or reject a letter to the editor, or a wedding announcement? I haven't tried to answer that question. But we'll need to if this or other transparency legislation efforts move forward.

3. Non-Binding (I hope) Items

A. Considering future whistleblower protections:  The Government Accountability Office is directed to issue a report on the idea of protections and awards for platform employees who disclose “violations of consumer protection,” meaning improper TOS enforcement in content moderation.

I love whistleblowers. I even represented them at one point. But I shudder to think how politically loaded this particular kind of “whistleblowing” will be. The idea that a dissatisfied employee can bring ideologically grounded charges to whichever FTC Commission offices are staffed by members of his or her political party makes me want to hide in a cave. As one former platform employee told me, “bounties for selective leaking of stylized evidence against teammates is an episode of Black Mirror I’d be too scared to watch." This scenario is enough to make me wonder if those First Amendment concerns I touched on above – the ones about a government agency looking for opportunities to effectively dictate platform editorial policy in the guise of interpreting the platform’s own rules – are actually a really, really big deal. But… in any case, this whistleblower provision isn’t anything mandatory, for now.

B. A voluntary standards framework. The National Institute of Standards and Technology is directed to convene experts and issue non-binding guidelines on topics like information-sharing and use of automation.

This shares a suspicious resemblance with the “best practices” in EARN IT. Those were nominally not required in that bill’s original draft (but were in fact a prerequisite for preserving immunity). They are even more not required in EARN IT’s current draft (unless they become de facto standards for liability under the raft of state laws EARN IT unleashes). Perhaps I am naïve, but I am less worried about the voluntary standards proposed in PACT. For one thing, they won’t be crafted by nominees put in place by a who’s-who of DC heavy-hitters, as EARN IT’s would be. And the specific topics listed in the PACT Act – like developing technical standards to authenticate court orders – don’t all look like hooks for liability, like the ones in EARN IT. Most importantly, though, because PACT does not open platforms up to a flood of individual allegations under vague state laws, it leaves fewer legal blanks to be filled in by things like “voluntary” best practices or standards. Of course, the NIST standards might still come into play under PACT for courts assessing agency or AG enforcement of federal laws (or for platforms deciding whether to do what courts and AGs demand behind closed doors, to avoid going to court). So I may come to regret my optimism in calling them non-binding.   

Conclusion

I can’t tell you what to think about PACT. That’s in part because I am still trying to understand some of its key provisions. (What are these federal civil laws it talks about? Will DOJ be enforcing them? What are the logistics and First Amendment ramifications of its FTC consumer protection model for TOS-based content moderation?) But it’s also in part because its core ideas are things where reasonable minds might differ. It’s not disingenuous nonsense, and it’s not a list of words that sound plausible on paper but that legal experts know are meaningless or worse. It’s a list of serious ideas, imperfectly executed. If you like any of them, you should be rooting for lawmakers to do the work to figure out how to refine them into something more operationally feasible. You should be calling on lawmakers to bring in the plumbers.   

-

Image: Avi Tuschman, Adam Berinsky, David Rand

Please join the Cyber Policy Center for Exploring Potential “Solutions” to Online Disinformation, hosted by the Cyber Policy Center's Kelly Born, with guests Adam Berinsky, Mitsui Professor of Political Science at MIT and Director of the MIT Political Experiments Research Lab (PERL); David Rand, Erwin H. Schell Professor and Associate Professor of Management Science and Brain and Cognitive Sciences, and Director of the Human Cooperation Laboratory and the Applied Cooperation Team at MIT; and Avi Tuschman, Founder & CIO, Pinpoint Predictive. The session is open but registration is required.

Adam Berinsky is the Mitsui Professor of Political Science at MIT and serves as the director of the MIT Political Experiments Research Lab (PERL). He is also a Faculty Affiliate at the Institute for Data, Systems, and Society (IDSS). Berinsky received his PhD from the University of Michigan in 2000. He is the author of "In Time of War: Understanding American Public Opinion from World War II to Iraq" (University of Chicago Press, 2009). He is also the author of "Silent Voices: Public Opinion and Political Participation in America" (Princeton University Press, 2004) and has published articles in many journals. He is currently the co-editor of the Chicago Studies in American Politics book series at the University of Chicago Press. He is also the recipient of multiple grants from the National Science Foundation and was a fellow at the Center for Advanced Study in the Behavioral Sciences.

David Rand is the Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences at MIT Sloan, and the Director of the Human Cooperation Laboratory and the Applied Cooperation Team. Bridging the fields of behavioral economics and psychology, David’s research combines mathematical/computational models with human behavioral experiments and online/field studies to understand human behavior. His work uses a cognitive science perspective grounded in the tension between more intuitive versus deliberative modes of decision-making, and explores topics such as cooperation/prosociality, punishment/condemnation, perceived accuracy of false or misleading news stories, political preferences, and the dynamics of social media platform behavior. 

Avi Tuschman is a Stanford StartX entrepreneur and founder of Pinpoint Predictive, where he currently serves as Chief Innovation Officer and Board Director. He’s spent the past five years developing the first psychometric AI-powered data-enrichment platform, which ranks 260 million individuals for performance marketing and risk management applications. Tuschman is an expert on the science of heritable psychometric traits. His book and research on human political orientation have been covered in peer-reviewed and mainstream media from 25 countries. Prior to his career in tech, he advised current and former heads of state as well as multilateral development banks in the Western Hemisphere. Tuschman completed his undergraduate and doctoral degrees in evolutionary anthropology at Stanford.

News Type: Q&As

Image: Marietje Schaake

 

  


 

The European Union is often called a ‘super-regulator’, especially when it comes to data protection and privacy rules. Having seen European lawmaking up close, in all its complexity, I have often considered this label an exaggeration. Yes, the European Union frequently takes the first steps in ensuring that principles continue to be protected even as digitization disrupts. However, the mismatch between the speed at which technology evolves and the pace of democratic lawmaking is perpetual.

Even the famous, or infamous, General Data Protection Regulation does not meet many essential regulatory needs of the moment. The mainstreaming of artificial intelligence, in particular, poses new challenges to the protection of rights and the sustaining of the rule of law. In its White Paper on Artificial Intelligence, as well as in the Data Strategy, the European Commission refers to the common good, the public interest, and societal needs, rather than emphasizing only the regulation of the digital market. These are welcome steps toward acknowledging the depth and scope of technological impact and defining harms in more than economic terms. It remains to be seen how the visions articulated in the White Paper and the Strategy will translate into concrete legislation.

One proposal for making concrete improvements to legal frameworks is outlined by Martin Tisné in The Data Delusion. He highlights the need to update legal privacy standards to better reflect the harms incurred through collective data analysis, as opposed to individual privacy violations. Martin makes a clear case for addressing the discrepancy between the profit models benefitting from grouped data and the ability of any individual to prove the harms caused to his or her rights.

The lack of transparency into the inner workings of algorithmic data processing further hinders the path to much-needed accountability for the powerful technology businesses operating growing parts of our information architecture and the data flows they process.

While the EU takes the lead in setting values-based standards and rules for the digital layer of our societies and economies, a lot of work remains to be done.

Marietje Schaake: Martin, in your paper you address the gap between the benefits for technology companies through collective data processing, and the harms for society. You point to historic reasons for individual privacy protections in European laws. Do you consider the European Union to be the best positioned to address the legal shortcomings, especially as you point out that some opportunities to do so were missed in the GDPR?

Martin Tisné: Europe is well positioned, but perhaps not for the reasons we traditionally think of (a strong privacy tradition, empowered regulators). Individual privacy alone is a necessary but not sufficient foundation stone on which to build the future of AI regulation. And whilst much is made of European regulators, the GDPR has been hobbled by the lack of funding and capacity of data protection commissioners across Europe. What Europe does have, though, is a legal, political, and societal tradition of thinking about the public interest, the common good, and how these are balanced against individual interests. This is where we should innovate, taking inspiration from environmental litigation such as the Urgenda Climate Case against the Dutch Government, which established that the government had a legal duty to prevent dangerous climate change in the name of the public interest.

And Europe also has a lot to learn from other political and legal cultures. Part of the future of data regulation may come from the indigenous data rights movement, with its greater emphasis on the societal and group impacts of data, or from the concept of Ubuntu ethics, which assigns community and personhood to all people.

Schaake: What scenario do you foresee in 10 years if collective harms are not dealt with in updates of laws? 

Tisné: I worry we will see two impacts. The first is a continuation of what we are seeing now: negative impacts of digital technologies on discrimination, voting rights, privacy, and consumers. As people become increasingly aware of the problem, there will be a corresponding increase in legal challenges. We’re seeing this already, for example, with the Lloyd class action case against Google for collecting iPhone data. But I worry these will fail to stick and have lasting impact because of the obligation to have these cases turn on one person’s, or a class of people’s, individual experiences. It is very hard for individuals to seek remedy for collective harms, as opposed to personal privacy invasions. So unless we solve the issue I raise in the paper – the collective impact of AI and automation – these harms will continue to fuel polarization, discrimination on the basis of age, gender (and many other aspects of our lives), and the further strengthening of populist regimes.

I also worry about the ways in which algorithms will optimize on the basis of seemingly random classifications (e.g. “people who wear blue shirts, get up early on Saturday mornings, and were geo-located in a particular area of town at a particular time”). These may be proxies for protected characteristics (age, gender reassignment, disability, race, religion, sex, marriage, pregnancy/maternity, sexual orientation) and provide grounds for redress. They may also not be, and may instead sow the seeds of future discrimination and harm. Authoritarian rulers are likely to take advantage of the seeming invisibility of these data-driven harms to further silence their opponents. How can I protect myself if I don’t know the basis on which I am being discriminated against or targeted?

Schaake: How do you reflect on the difference in speed between technological innovations and democratic lawmaking? Some people imply this will give authoritarian regimes an advantage in setting global standards and rules. What are your thoughts on ensuring democratic governments speed up? 

Tisné: Democracies cannot afford to be outpaced by technological innovation and constantly be fighting yesterday’s wars. Our laws have not changed to reflect changes in technology, which extracts value from collective data, and need to catch up.  A lot of the problems stem from the fact that in government (as in companies), the people responsible for enforcement are separated from those with the technical understanding. The solution lies in much better translation between technology, policy and the needs of the public.  

An innovation and accountability-led government must involve and empower the public in co-creating policies, above and beyond the existing rules that engage individuals (consent forms etc.). In the paper I propose a Public Interest Data Bill that addresses this need: the rules of the digital highway used as a negotiation between the public and regulators, between private data consumers and data generators. Specifically: clear transparency, public participation and realistic sanctions when things go wrong.

This is where democracies should hone their advantage over authoritarian regimes – using such an approach as the basis for setting global standards and best practices (e.g. affected communities providing input into algorithmic impact assessments). 

Schaake: The protection of privacy is what sets democratic societies apart from authoritarian ones. How likely is it that we will see an effort between democracies to set legal standards across borders together? Can we overcome the political tensions across the Atlantic, and strengthen democratic alliances globally?

Tisné: I remain a big supporter of international cooperation. I helped found the Open Government Partnership ten years ago, which remains the main forum for 79 countries to develop innovative open government reforms jointly with the public. Its basic principles hold true: involve global south and global north countries with equal representation, bring civil society in jointly with government from the outset, seek out and empower reformers within government (they exist, regardless of who is in power in the given year), and go local to identify exciting innovations. 

If we heed those principles we can set legal standards by learning from open data and civic technology reforms in Taiwan, experiments with data trusts in India, legislation to hold algorithms accountable in France; and by identifying and working with the individuals driving those innovations, reformers such as Audrey Tang in Taiwan, Katarzyna Szymielewicz in Poland, and Henri Verdier in France. 

These reformers need a home, a base from which to influence policymakers and technologists, and to get the people responsible for enforcement working with those who have the technical understanding. The Global Partnership on Artificial Intelligence may be that home, but these are early days; it needs to be agile enough to work with the private sector and civil society as well as with governments and the international system. I remain hopeful.

 

 

Subtitle: Protecting Individuals Isn't Enough When the Harm Is Collective. A Q&A with Marietje Schaake and Martin Tisné on his new paper, The Data Delusion.
