Daphne Keller
Blogs

I am a huge fan of transparency about platform content moderation. I’ve considered it a top policy priority for years, and written about it in detail (with Paddy Leerssen, who also wrote this great piece about recommendation algorithms and transparency). I sincerely believe that without it, we are unlikely to correctly diagnose current problems or arrive at wise legal solutions.

So it pains me to admit that I don’t really know what “transparency” I’m asking for. I don’t think many other people do, either. Researchers and public interest advocates around the world can agree that more transparency is better. But, aside from people with very particular areas of interest (like political advertising), almost no one has a clear wish list. What information is really important? What information is merely nice to have? What are the trade-offs involved?

That imprecision is about to become a problem, though it’s a good kind of problem to have. A moment of real political opportunity is at hand. Lawmakers in the US, Europe, and elsewhere are ready to make some form of transparency mandatory. Whatever specific legal requirements they create will have huge consequences. The data, content, or explanations they require platforms to produce will shape our future understanding of platform operations, and our ability to respond — as consumers, as advocates, or as democracies. Whatever disclosures the laws don’t require may never happen.

It’s easy to respond to this by saying “platforms should track all the possible data, we’ll see what’s useful later!” Some version of this approach might be justified for the very biggest “gatekeeper” or “systemically important” platforms. Of course, making Facebook or Google save all that data would be somewhat ironic, given the trouble they’ve landed in by storing similar not-clearly-needed data about their users in the past. (And the more detailed data we store about particular takedowns, the likelier it is to be personally identifiable.)

For any platform, though, we should recognize that the new practices required for transparency reporting come at a cost. That cost might include driving platforms to adopt simpler, blunter content rules in their Terms of Service. That would reduce their expenses in classifying or explaining decisions, but presumably lead to overly broad or narrow content prohibitions. It might raise the cost of adding “social features” like user comments enough that some online businesses, like retailers or news sites, just give up on them. That would reduce some forms of innovation, and eliminate useful information for Internet users. For small and midsized platforms, transparency obligations (like other expenses related to content moderation) might add yet another reason to give up on competing with today’s giants, and accept an acquisition offer from an incumbent that already has moderation and transparency tools. Highly prescriptive transparency obligations might also drive de facto standardization and homogeneity in platform rules, moderation practices, and features.

None of these costs provides a reason to give up on transparency — or even to greatly reduce our expectations. But all of them are reasons to be thoughtful about what we ask for. It would be helpful if we could better quantify these costs, or get a handle on which kinds of transparency reporting are easier and harder to do in practice.

I’ve made a (very in-the-weeds) list of operational questions about transparency reporting, to illustrate some issues that are likely to arise in practice. I think detailed examples like these are helpful in thinking through both which kinds of data matter most, and how much precision we need within particular categories. For example, I personally want to know with great precision how many government orders a platform received, how it responded, and whether any orders led to later judicial review. But to me it seems OK to allow some margin of error for platforms that don’t have standardized tracking and queuing tools, and that as a result might modestly mis-count TOS takedowns (whether in absolute numbers or as a percentage).

I’ll list that and some other recommendations below. But these “recommendations” are very tentative. I don’t know enough to have a really clear set of preferences yet. There are things I wish I could learn from technologists, activists, and researchers first. The venues where those conversations would ordinarily happen — and, importantly, where observers from very different backgrounds and perspectives could have compared the issues they see, and the data they most want — have been sadly reduced for the past year.

So here is my very preliminary list:

  • Transparency mandates should be flexible enough to accommodate widely varying platform practices and policies. Any de facto push toward standardization should be limited to the very most essential data.
  • The most important categories of data are probably the main ones listed in the DSA: number of takedowns, number of appeals, number of successful appeals. But as my list demonstrates, those all can become complicated in practice (a toy bookkeeping sketch follows this list).
  • It’s worth taking the time to get legal transparency mandates right. That may mean delegating exact transparency rules to regulatory agencies in some countries, or conducting studies prior to lawmaking in others.
  • Once rules are set, lawmakers should be very reluctant to move the goalposts. If a platform (especially a smaller one) invests in rebuilding its content moderation tools to track certain categories of data, it should not have to overhaul those tools soon because of changed legal requirements.
  • We should insist on precise data in some cases, and tolerate more imprecision in others (based on the importance of the issue, platform capacity, etc.). And we should take the time to figure out which is which.
  • Numbers aren’t everything. Aggregate data in transparency reports ultimately just tell us what platforms themselves think is going on. To understand what mistakes they make, or what biases they may exhibit, independent researchers need to see the actual content involved in takedown decisions. (This in turn raises a slew of issues about storing potentially unlawful content, user privacy and data protection, and more.)
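
To make the bookkeeping concrete, here is a minimal sketch, in Python, of the kind of per-decision record and aggregate tally those DSA categories imply. The field names are my own invention for illustration, not anything the DSA actually prescribes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModerationDecision:
    """One takedown decision. All field names are hypothetical, not DSA-mandated."""
    decided_on: date
    basis: str                     # e.g. "tos" vs. "government_order" -- the post argues
                                   # these two deserve different levels of precision
    rule_violated: str             # which Terms of Service rule was applied
    appealed: bool = False
    appeal_succeeded: bool = False

def summarize(decisions: list[ModerationDecision]) -> dict[str, int]:
    """Aggregate the three headline DSA categories from per-decision records."""
    return {
        "takedowns": len(decisions),
        "appeals": sum(d.appealed for d in decisions),
        "successful_appeals": sum(d.appeal_succeeded for d in decisions),
    }
```

Even this toy version shows where the complications creep in: a removal made under two rules at once, a reinstatement that happens without a formal appeal, or a platform that never tracked the basis field at all will each distort the headline numbers.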

It’s time to prioritize. Researchers and civil society should assume we are operating with a limited transparency “budget,” which we must spend wisely — asking for the information we can best put to use, and factoring in the cost. We need better understanding of both research needs and platform capabilities to do this cost-benefit analysis well. I hope that the window of political opportunity does not close before we manage to do that.

Daphne Keller, Director of the Program on Platform Regulation

Q&As

Q&A with Daphne Keller of the Program on Platform Regulation

Keller explains some of the issues currently surrounding platform regulation

In a new blog post, Daphne Keller, Director of the Program on Platform Regulation at the Cyber Policy Center, looks at the need for transparency when it comes to content moderation and asks: what kind of transparency do we really want?

Hate speech is a contextual phenomenon. What offends or inflames in one context may differ from what incites violence in a different time, place, and cultural landscape. Theories of hate speech, especially Susan Benesch’s concept of “dangerous speech” (hateful speech that incites violence), have focused on the factors that cut across these paradigms. However, the existing scholarship is narrowly focused on situations of mass violence or societal unrest in America or Europe.

Journal Articles
Published by Michigan Law School Scholarship Repository
Brittan Heller
-

Join the Cyber Policy Center and moderator Daniel Bateyko in conversation with Karen Nershi for How Strong Are International Standards in Practice? Evidence from Cryptocurrency Transactions.

The rise of cryptocurrency (decentralized digital currency) presents challenges for state regulators given its connection to illegal activity and pseudonymous nature, which has allowed both individuals and businesses to circumvent national laws through regulatory arbitrage. Karen Nershi assesses the degree to which states have managed to regulate cryptocurrency exchanges, providing a detailed study of international efforts to impose common regulatory standards for a new technology. To do so, she introduces a dataset of cryptocurrency transactions collected during a two-month period in 2020 from exchanges in countries around the world and employs bunching estimation to compare levels of unusual activity below a threshold at which exchanges must screen customers for money laundering risk. She finds that exchanges in some, but not all, countries show substantial unusual activity below the threshold; these findings suggest that while countries have made progress toward regulating cryptocurrency exchanges, gaps in enforcement across countries allow for regulatory arbitrage.
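
As a rough illustration of the bunching logic (not Nershi’s actual code or data; the threshold, bin width, and linear counterfactual below are all assumptions for the sketch), one can compare the transaction count just below a screening threshold with what nearby bins predict:

```python
import numpy as np

def excess_bunching(amounts: np.ndarray, threshold: float = 1000.0,
                    bin_width: float = 50.0, n_bins: int = 20) -> float:
    """Crude bunching estimate: how many more transactions fall in the bin
    just below `threshold` than a linear fit to surrounding bins predicts.
    All parameter values are illustrative, not taken from the study."""
    edges = threshold + bin_width * np.arange(-n_bins, n_bins + 1)
    counts, _ = np.histogram(amounts, bins=edges)
    centers = (edges[:-1] + edges[1:]) / 2
    # Fit the smooth counterfactual on bins away from the threshold...
    away = np.abs(centers - threshold) > 2 * bin_width
    slope, intercept = np.polyfit(centers[away], counts[away], 1)
    predicted = slope * centers + intercept
    # ...then measure the excess mass in the bin just below the threshold.
    just_below = np.searchsorted(edges, threshold) - 1
    return float(counts[just_below] - predicted[just_below])
```

A pile-up of transactions just under the threshold, relative to that counterfactual, is the classic signature of customers splitting payments to stay below a screening line.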

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation required) and virtual attendance (open to the public) are available; registration is required.

Karen Nershi is a Postdoctoral Fellow at Stanford University's Stanford Internet Observatory and the Center for International Security and Cooperation (CISAC). In the summer of 2021, she completed her Ph.D. in political science at the University of Pennsylvania specializing in the fields of international relations and comparative politics. Through an empirical lens, her research examines questions of international cooperation and regulation within international political economy, including challenges emerging from the adoption of decentralized digital currency and other new technologies. 

Specific topics Dr. Nershi explores in her research include ransomware, cross-national regulation of the cryptocurrency sector, and international cooperation around anti-money laundering enforcement. Her research has been supported by the University of Pennsylvania GAPSA Provost Fellowship for Innovation and the Christopher H. Browne Center for International Politics. 

Before beginning her doctorate, Karen Nershi earned a B.A. in International Studies with honors at the University of Alabama. She lived and studied Arabic in Amman, Jordan, and Meknes, Morocco, as a Foreign Language and Area Studies Fellow and a Critical Language Scholarship recipient. She also lived and studied in Mannheim, Germany, in addition to interning at the U.S. Consulate General in Frankfurt, Germany.

Dan Bateyko is the Special Projects Manager at the Stanford Internet Observatory.

Dan worked previously as a Research Coordinator for The Center on Privacy & Technology at Georgetown Law, where he investigated Immigration and Customs Enforcement surveillance practices, co-authoring American Dragnet: Data-Driven Deportation in the 21st Century. He has worked at the Berkman Klein Center for Internet & Society and the Dangerous Speech Project, and served as a research assistant for Amanda Levendowski, whom he assisted with legal scholarship on facial surveillance.

In 2016, he received a Thomas J. Watson Fellowship. He spent his fellowship year talking with people about digital surveillance and Internet infrastructure in South Korea, China, Malaysia, Germany, Ghana, Russia, and Iceland. His writing has appeared in Georgetown Tech Law Review, Columbia Journalism Review, Dazed Magazine, The Internet Health Report, Council on Foreign Relations' Net Politics, and Global Voices. He is a 2022 Internet Law & Policy Foundry Fellow.

Dan received his Master of Law & Technology from Georgetown University Law Center (where he received the IAPP Westin Scholar Book Award for excellence in Privacy Law), and his B.A. from Middlebury College.

Karen Nershi
Seminars
-

Join the Program on Democracy and the Internet (PDI) and moderator Alex Stamos in conversation with Ronald E. Robertson for Engagement Outweighs Exposure to Partisan and Unreliable News within Google Search.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation required) and virtual attendance (open to the public) are available; registration is required.

If popular online platforms systematically expose their users to partisan and unreliable news, they could potentially contribute to societal issues like rising political polarization. This concern is central to the echo chamber and filter bubble debates, which critique the roles that user choice and algorithmic curation play in guiding users to different online information sources. These roles can be measured in terms of exposure, the URLs seen while using an online platform, and engagement, the URLs selected while on that platform or browsing the web more generally. However, due to the challenges of obtaining ecologically valid exposure data — what real users saw during their regular platform use — studies in this vein often only examine engagement data, or estimate exposure via simulated behavior or inference. Despite their centrality to the contemporary information ecosystem, few such studies have focused on web search, and even fewer have examined both exposure and engagement on any platform. To address these gaps, we conducted a two-wave study pairing surveys with ecologically valid measures of exposure and engagement on Google Search during the 2018 and 2020 US elections. We found that participants' partisan identification had a small and inconsistent relationship with the amount of partisan and unreliable news they were exposed to on Google Search, a more consistent relationship with the search results they chose to follow, and the most consistent relationship with their overall engagement. That is, compared to the news sources our participants were exposed to on Google Search, we found more identity-congruent and unreliable news sources in their engagement choices, both within Google Search and overall. These results suggest that exposure and engagement with partisan or unreliable news on Google Search are not primarily driven by algorithmic curation, but by users' own choices.
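
To make the exposure/engagement distinction concrete, here is a toy sketch (made-up URLs and an invented source list, not the study’s actual pipeline) of how the two rates can be computed for one participant:

```python
from urllib.parse import urlparse

# Invented stand-in for a list of unreliable news domains.
UNRELIABLE = {"example-junk-news.com"}

def unreliable_share(urls: list[str]) -> float:
    """Fraction of URLs whose domain is on the unreliable-source list."""
    if not urls:
        return 0.0
    return sum(urlparse(u).netloc in UNRELIABLE for u in urls) / len(urls)

# Exposure: every result URL the participant saw on their search pages.
seen = ["https://example-news.org/a", "https://example-junk-news.com/b",
        "https://example-news.org/c", "https://example-news.org/d"]
# Engagement: the subset of results the participant actually clicked.
clicked = ["https://example-junk-news.com/b"]

print(unreliable_share(seen))     # 0.25 -- exposure rate
print(unreliable_share(clicked))  # 1.0  -- engagement rate exceeds exposure
```

When the engagement rate systematically exceeds the exposure rate, as the study reports, the imbalance points to user choice rather than to what the ranking algorithm surfaced.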

Dr. Ronald E. Robertson received his Ph.D. in Network Science from Northeastern University in 2021. He was advised by Christo Wilson, a computer scientist, and David Lazer, a political scientist. For his research, Dr. Robertson uses computational tools, behavioral experiments, and qualitative user studies to measure user activity, algorithmic personalization, and choice architecture in online platforms. By rooting his questions in findings and frameworks from the social, behavioral, and network sciences, his goal is to foster a deeper and more widespread understanding of how humans and algorithms interact in digital spaces. Prior to Northeastern, Dr. Robertson obtained a BA in Psychology from the University of California San Diego and worked with research psychologist Robert Epstein at the American Institute for Behavioral Research and Technology.

Ronald E. Robertson, Research Scientist
Seminars
-

Join the Program on Democracy and the Internet (PDI) and moderator Andrew Grotto in conversation with L. Jean Camp for Create a Market for Safe, Secure Software.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation required) and virtual attendance (open to the public) are available; registration is required.

Today the security market, particularly in embedded software and Internet of Things (IoT) devices, is a lemons market: buyers simply cannot distinguish between secure and insecure products. To enable the market for secure, high-quality products to thrive, buyers need to have some knowledge of the contents of these digital products. Once a product or software package is purchased, ensuring it remains safe requires knowing whether it includes publicly disclosed vulnerabilities. Again this requires knowledge of the contents. When consumers do not know the contents of their digital products, they cannot know if they are at risk and need to take action.

The Software Bill of Materials (SBOM) was identified as a critical instrument for meeting these challenges and securing software supply chains in the Biden Administration’s Executive Order on Improving the Nation’s Cybersecurity (EO 14028). In this presentation Camp will introduce SBOMs, provide examples, and explain the components that are needed in the marketplace for this initiative to meet its potential.
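
For readers who have not seen one, a minimal SBOM can be sketched as follows (a CycloneDX-style record rendered as a Python dictionary; the component names and versions are made up for illustration):

```python
import json

# A minimal, CycloneDX-style SBOM for a hypothetical IoT firmware image.
# Component names and versions are illustrative only.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "openssl", "version": "1.1.1k"},
        {"type": "library", "name": "busybox", "version": "1.33.0"},
    ],
}

# With contents declared, a buyer or scanner can match each entry against
# public vulnerability databases to see whether known flaws ship inside.
print(json.dumps(sbom, indent=2))
```

The market mechanism is the point: once contents are declared in machine-readable form, the buyer no longer faces the lemons problem of undisclosed components.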

Jean Camp is a Professor at Indiana University with appointments in Informatics and Computer Science. She is a Fellow of the AAAS (2017), the IEEE (2018), and the ACM (2021). She joined Indiana after eight years at Harvard’s Kennedy School. A year after earning her doctorate from Carnegie Mellon, she served as a Senior Member of the Technical Staff at Sandia National Laboratories. She began her career as an engineer at Catawba Nuclear Station after a double major in electrical engineering and mathematics, followed by an MSEE in optoelectronics at the University of North Carolina at Charlotte.

L. Jean Camp, Professor at Indiana University
Seminars
-

Join the Program on Democracy and the Internet (PDI) and moderator Daphne Keller in conversation with Aleksandra Kuczerawy for European Developments in Internet Regulation.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation required) and virtual attendance (open to the public) are available; registration is required.

The Digital Services Act (DSA) is a landmark new European Union law addressing illegal and harmful content online. Its main goals are to create a safer digital space and to enhance the protection of fundamental rights online. In this talk, Aleksandra Kuczerawy will discuss the core elements of the DSA, such as the layered system of due diligence obligations, content moderation rules, and the enforcement framework, while providing underlying policy context for the US audience.

Aleksandra Kuczerawy is a postdoctoral scholar at the Program on Platform Regulation. She has been a postdoctoral researcher at KU Leuven’s Centre for IT & IP Law and is assistant editor of the International Encyclopedia of Law (IEL) – Cyber Law. She has worked on the topics of privacy and data protection, media law, and the liability of Internet intermediaries since 2010 (projects PrimeLife, Experimedia, REVEAL). In 2017 she participated in the work of the Committee of Experts on Internet Intermediaries (MSI-NET) at the Council of Europe, which was responsible for drafting a recommendation by the Committee of Ministers on the roles and responsibilities of internet intermediaries and a study on Algorithms and Human Rights.

Daphne Keller
Aleksandra Kuczerawy, Postdoctoral Scholar at the Program on Platform Regulation (PPR)
Seminars
-

Please note: the event is now sold out, though a waitlist is available through the registration link above.

The Transatlantic Summit is where the worlds of cutting-edge research, industry, and policy come together to find answers on geopolitics, digital platforms, emerging tech, and digital sovereignty. Whether you're an industry leader, policy maker, or student, join the start of a new Transatlantic movement seeking synergies between technology and society and become part of the international conversation going forward.

About:

  • Creates a vibrant forum for a dialogue between the US and Europe in Silicon Valley about the impact of digital technologies on business and society
  • Builds a strong network for German-American collaboration in digital innovation, business, and geopolitics
  • Excites, connects, and inspires: participants meet the movers and shakers of the digital future from business, academia, and politics

 

Topics:

  1. Digital Sovereignty
  2. Geopolitics of Emerging Technologies
  3. Digital Platforms and Misinformation

 

The conference is jointly organized by the German Federal Foreign Office, The Representatives of German Business (GAAC West), the German Consulate General of San Francisco, the Stanford German Student Association, and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, and addresses current discussions about digital technologies, business, and society. Join us and be inspired by our series of speakers and networking sessions, which bring together leaders, politicians, students, and changemakers.

Digital Sovereignty and Multilateral Collaboration

Digital sovereignty vs. cooperation: What should the future of the transatlantic partnership on digital policies look like, and how do we reach it?

Technology increasingly sits at the epicenter of geopolitics. In recent years, the notion of technological or digital sovereignty has emerged in Europe as a means of promoting European leadership and strategic autonomy in the digital field. On the other side of the Atlantic, the United States finds itself in an increasingly fierce race with China for global technology dominance. Against this backdrop, cooperation between the European Union and the United States may be more critical than ever. This raises important questions: What does Europe's move toward digital sovereignty and self-determination mean for the transatlantic partnership? And how should the US and EU balance sovereignty and cooperation in digital and technology policy? Our panel will explore tensions between sovereignty and cooperation and what the future of transatlantic policy may look like on issues from data protection to semiconductors, in light of the rising technological influence and ambitions of China.

John Zysman, Professor Emeritus, UC Berkeley
Maryam Cope, Head of Government Affairs, ASML U.S.
Hannah Bracken, Policy Advisor, Privacy Shield, U.S. Department of Commerce
Adriana Groh, Co-Founder, Sovereign Tech Fund

Agenda & Speakers

Transatlantic Summit: Sovereignty vs. Cooperation in the Digital Era
Thursday, Nov. 17th, 2022, 9:00am – 6:00pm PT
Vidalakis Dining Hall, Schwab Residential Center, Stanford, CA 94305

Conferences
-

Join the Program on Democracy and the Internet (PDI) and moderator Nate Persily in conversation with Aleksandra Kuczerawy for European Developments in Internet Regulation.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation required) and virtual attendance (open to the public) are available; registration is required.

Aleksandra Kuczerawy is a postdoctoral scholar at the Program on Platform Regulation. She has been a postdoctoral researcher at KU Leuven’s Centre for IT & IP Law and is assistant editor of the International Encyclopedia of Law (IEL) – Cyber Law. She has worked on the topics of privacy and data protection, media law, and the liability of Internet intermediaries since 2010 (projects PrimeLife, Experimedia, REVEAL). In 2017 she participated in the work of the Committee of Experts on Internet Intermediaries (MSI-NET) at the Council of Europe, which was responsible for drafting a recommendation by the Committee of Ministers on the roles and responsibilities of internet intermediaries and a study on Algorithms and Human Rights.

Aleksandra Kuczerawy, Postdoctoral Scholar at the Program on Platform Regulation (PPR)
Seminars
-

Join the Program on Democracy and the Internet (PDI) and moderator Nate Persily in conversation with Chenyan Jia for The Evolving Role of AI in Political News Consumption: The Effects of Algorithmic vs. Community Label on Perceived Accuracy of Hyper-partisan Misinformation.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation required) and virtual attendance (open to the public) are available; registration is required.

Chenyan Jia (Ph.D., The University of Texas at Austin) is a postdoctoral scholar in the Program on Democracy and the Internet (PDI) at Stanford University. In fall 2023, she will join Northeastern University as an Assistant Professor in the School of Journalism in the College of Arts, Media, and Design, with a joint appointment in the Khoury College of Computer Sciences. She previously worked as a research assistant in UT's Human–AI Interaction Lab.

Her research interests lie at the intersection of communication and human-computer interaction. Her work has examined (a) the influence of emerging media technologies such as automated journalism and misinformation detection algorithms on people’s political attitudes and news consumption behaviors; (b) the political bias in news coverage through NLP techniques; (c) how to leverage AI technologies to reduce bias and promote democracy.

Her research has appeared in mass communication journals and top-tier AI and HCI venues including Human-Computer Interaction Journal (CSCW), Journal of Artificial Intelligence, International Journal of Communication, Media and Communication, ICLR, ICWSM, EMNLP, ACL, and AAAI. Her research has been awarded the Best Paper Award at AAAI 21. She was the recipient of the Harrington Dissertation Fellowship and the Dallas Morning News Graduate Fellowship for Journalism Innovation.


Chenyan Jia, Postdoctoral Scholar at the Program on Democracy and the Internet (PDI)
Seminars
-

Join the Program on Democracy and the Internet (PDI) and moderator Nate Persily in conversation with Meicen Sun for Internet Control as a Winning Strategy: How the Duality of Information Consolidates Autocratic Rule in the Digital Age.

This paper advances a new theory of how the Internet as a digital technology helps consolidate autocratic rule. Exploiting a major Internet control shock in China in 2014, the paper finds that Chinese data-intensive firms have gained from Internet control a 10% increase in revenue over other Chinese firms, and about 1-2% over their U.S. competitors. Meanwhile, the same Internet control has incurred a reduction of up to 25% in research quality for Chinese scholars, conditional on the knowledge intensity of their discipline. This occurred specifically via reduced access to cutting-edge knowledge from the outside world. These findings suggest that while politically motivated restrictions on information flows do take a toll on the country's long-term capacity for innovation, they lend a short-term benefit to its data-intensive sectors. Conventional wisdom on the inherent limits of information control by autocracies overlooks this crucial protectionist benefit, which aids autocratic power consolidation in the digital age.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person and virtual attendance are available; registration is required.

Meicen Sun is a postdoctoral scholar with the Program on Democracy and the Internet at Stanford University. Her research examines the political economy of information and the effect of information policy on the future of innovation and state power. Her writings have appeared in academic and policy outlets including Foreign Policy Analysis, Harvard Business Review, the World Economic Forum, the Asian Development Bank Institute, and The Diplomat, among others. She previously conducted research at the Center for Strategic and International Studies and at Georgetown University in Washington, DC, and at the UN Regional Centre for Peace and Disarmament in Africa. Bilingual in English and Chinese, she has also written stories, plays, and music and staged many of her works, in both languages, in China, Singapore, and the U.S. Sun has served as a Fellow on the World Economic Forum's Global Future Council on China and as a Research Affiliate with the MIT Initiative on the Digital Economy. She holds an A.B. with Honors from Princeton University, an A.M. with a Certificate in Law from the University of Pennsylvania, and a Ph.D. from the Massachusetts Institute of Technology.

Meicen Sun, Postdoctoral Scholar with the Program on Democracy and the Internet
Seminars