Security

FSI scholars produce research aimed at creating a safer world and examining the consequences of security policies on institutions and society. They look at longstanding issues including nuclear nonproliferation and the conflicts between countries like North and South Korea. But their research also examines new and emerging areas that transcend traditional borders – the drug war in Mexico and expanding terrorism networks. FSI researchers look at the changing methods of warfare with a focus on biosecurity and nuclear risk. They tackle cybersecurity with an eye toward privacy concerns and explore the implications of new actors like hackers.

Along with the changing face of conflict, terrorism and crime, FSI researchers study food security. They tackle the global problems of hunger, poverty and environmental degradation by generating knowledge and policy-relevant solutions. 

By Daphne Keller

Alex Feerst, one of the great thinkers about Internet content moderation, has a revealing metaphor about the real-world work involved. “You might go into it thinking that online information flows are best managed by someone with the equivalent of a PhD in hydrology,” he says. “But you quickly discover that what you really need are plumbers.” The daily work of enforcing Terms of Service, or honoring legal takedown demands under laws like the Digital Millennium Copyright Act (DMCA), is all about the plumbing. If you don’t identify rules and operational logistics that function at scale, then you won’t accomplish what you set out to do. If you’re trying to enforce Terms of Service, you’ll get erratic and unfair outcomes. If you’re trying to enforce laws by making platforms liable for users’ unlawful posts, you’ll incentivize removal of lawful speech, encouraging platforms to appease anyone who merely claims online speech is illegal.

Until recently, despite the seemingly daily drumbeat of legislative proposals in this area, none of the lawmakers seemed to have talked to any plumbers. Bills like FOSTA and EARN IT proclaimed important goals, but did not lay out a system to actually achieve them. The original version of the EARN IT Act in particular failed in this regard. Its goal was incredibly important: to combat the scourge of child sexual exploitation online. But its mechanism for achieving that goal was basically a punt. EARN IT told platforms’ operational teams that they would be subjected to some rules eventually – but left those rules to be determined by an unaccountable body at some later date, after the law was already in place.  The more recent EARN IT draft, which passed out of committee on July 2, is, in a way, worse. It gives up on the idea of setting clear rules for platform content moderation operations at all. Instead, it exposes platforms to an unknown array of state laws under vague standards, to be interpreted by courts at some future date — leaving companies to guess how they need to redesign their services to avoid huge civil fines or criminal prosecution.

Against this backdrop, the “Platform Accountability and Consumer Transparency Act” (the PACT Act), sponsored by Sens. Brian Schatz (D-HI) and John Thune (R-SD), is a huge step forward. That’s not to say I love it (or endorse it). There are parts I strongly disagree with. Other parts build out ideas that might be workable, but it depends on details that are not resolved yet in the draft, or that just plain need fixing. Still: This is an intellectually serious effort to grapple with the operational challenges of content moderation at the enormous scale of the Internet. That in itself makes it remarkable in today’s DC environment. We should welcome PACT as a vehicle for serious, rational debate on these difficult issues.  

Focusing on operational logistics, as PACT does, is important. But of course, to achieve its legitimate ends, the law still has to get those logistics right. In that sense, PACT has a long way to go. The process-based rules that it sets forth need a lot more tire-kicking. They should get at least the level of careful review and negotiation that the DMCA did back in 1998: a lot of meetings, among a lot of stakeholders, putting in enough time to truly hash out questions of operational detail for those harmed by online content, users accused of breaking the law, and platforms. We seem less able to have careful discussion now than we were back then. But we should be much better equipped to do it. Today we have not only a broad cadre of academic and civil society experts in intermediary liability -- many of whom signed onto these Seven Principles on point -- but also an entire field of professionals who work on content moderation or on “trust and safety” more broadly. We have the plumbers! They know how things work! If PACT’s sponsors are serious about proposing the best possible law, they should bring them in to fix this thing.  

Big picture, PACT has (1) a set of rules for platform liability for unlawful content, (2) a set of consumer protection-based rules that mostly affect platforms’ voluntary moderation under Terms of Service, and (3) a few non-binding items.

1. Rules Changing CDA 230 Immunity and Exposing Platforms to Liability for Unlawful User Content

A. The Court Order Standard: Courts should decide what’s illegal, and once they do, platforms should honor those decisions.

If you want to walk back protections of CDA 230, taking away immunity for content that platforms know was deemed illegal by a court after a fair process is the low-hanging fruit. But this “court order standard” is no panacea. On the one hand, it is subject to abuse when frivolous claimants get default judgments, or just falsify court orders. On the other, some would argue that it creates too high a bar if it lets platforms leave up highly damaging material that we think they should be able to recognize as illegal without waiting for a court to act. (Though the worst know-it-when-you-see-it illegal content is proscribed by federal criminal law, for which Section 230 does not immunize platforms anyway.) For all its flaws, though, this standard would eliminate immunity in some of the most egregious cases currently covered by 230. It is the standard that many international human rights experts, civil society groups, legislatures, and courts have arrived at after wrangling for years with the problems created by fuzzier standards.

The PACT court order standard does have some problems. First, the way the bill is drafted makes it far too easy for subsequent amendments to eliminate the court order requirement entirely. A few wording changes in the bill’s Definitions section would leave the overall law with a very problematic notice and takedown model for anything that an accuser merely alleges is illegal. (More on that in the next section.) Second, the list of harms that can be addressed by court orders is weird. PACT allows court-order based takedowns under any federal criminal or civil law, or under state defamation law. That leaves out a lot of real harms that are primarily addressed by state law, like non-consensual sexual images (“revenge porn”). Meanwhile, no one I’ve talked to is even sure what the universe of federal civil claims looks like. It seems unlikely to correspond to the online content people are most concerned about. This focus on federal civil law crops up throughout PACT, and to be honest I don’t understand why. (If platforms were interpreting the law, under notice and takedown, then the problems with exposing them to all those divergent state laws would be obvious. But courts do the interpreting under the court order standard, so diversity of state laws is not the issue.) Having a list of the specific claims PACT would authorize would really help anyone who wants to understand the law’s likely impact.

B. The Procedural Notice and Takedown Model: The law should specify a process for accusers to notify platforms about allegedly (or in PACT’s case, court-adjudicated) illegal content, for platforms to respond, and for accused speakers to defend themselves.

This is intermediary liability 101.  We already have a decent (if flawed) model for legally choreographed notice and takedown in the DMCA. Frankly, it’s embarrassing that other US legislative proposals so far have not bothered to include things like “counter-notice” opportunities for speakers targeted by takedown notices, or penalties for bad-faith accusers (who are legion). To be clear, even with procedural protections like these, notice and takedown based on private individuals’ allegations, rather than court adjudication, would be seriously problematic for many kinds of speech. Imagine if anyone alleging defamation could make platforms silence a #metoo post or remove a link to news coverage criticizing a politician, for example. 

Hecklers will always send bogus or abusive notices, platforms will always have incentives to comply, and offering appeals to victimized speakers won’t be enough to offset the problem. But any kind of process is worlds better than amorphous standards like liability for “recklessness” or for content that platforms “should” have known about. No one knows what those standards mean in practice, and no one but the biggest incumbent platforms will want to assume the risk and expense of litigating to find out. This kind of legal uncertainty hurts smaller competitors more than bigger ones, in addition to threatening the speech rights of ordinary Internet users.  

PACT’s notice and takedown process isn’t perfect. Perhaps most troubling is the requirement to take down content within 24 hours of notification -- a standard much like the one recently deemed to unconstitutionally infringe users’ expression rights under French legal standards. The PACT Act has some carve-outs to this obligation, but the bottom line is a powerful new pressure to comply, even in cases of serious uncertainty about important speech or information. That pressure is, in my opinion, too blunt an instrument. There are more granular problems in PACT’s takedown process rules, too. For example, I initially read PACT to require these notices (like DMCA notices) to spell out the exact URL or location of the illegal content. That’s pretty standard in notice and takedown systems. But on review I realized the notice can be much clumsier and broader, merely identifying the accused’s account.   

C. Empowering Federal Agency Enforcers: Platforms are not immune from federal agency enforcement of federal laws or regulations.

I believe my summary captures the PACT drafters’ intent, but to be honest, I am not 100% sure what the words mean (and I’ve heard the same confusion from others). The bill says that platforms lose immunity against enforcement “by the Federal Government’’ of any “Federal criminal or civil statute, or any regulations of an Executive agency (as defined in section 105 of title 5, United States Code) or an establishment in the legislative or judicial branch of the Federal Government.’’ I think that means that HUD, EPA, FDA, CPSC, and others can bring enforcement actions. But maybe it also opens up exposure to civil claims broadly from DOJ? Here again, we could all understand this better if we had a list of the kinds of claims at stake. My suspicion is that there are not all that many areas where (a) agencies have relevant enforcement power, and also (b) 230 would even matter as a defense. (I don’t think 230 is necessarily a defense to important HUD housing discrimination claims, for starters.) But without a list of claims, it’s hard to say.

However odd this is in practice, I think I understand the theoretical justification. Making platforms take content down based on any accuser’s legal claim has obvious problems, but waiting for a court to decide is slow and can limit access to justice for victims of real legal violations. So it’s natural to look for a compromise approach, and empowering trusted government agencies is an obvious one. (This has been done or proposed in a lot of countries and is quite controversial in some – basically where the agencies are least trusted by civil society.) If agencies don’t have the authority to require content removal for First Amendment or Administrative Procedure Act reasons, but they do have power to bring a court case, this essentially puts the “heckler’s veto” power to threaten litigation in the hands of government lawyers instead of private individuals. In principle, that’s a reasonable place to put it, and we should be able to expect them to use their power wisely. In practice, the Supreme Court has repeatedly had to stop government lawyers from telling publishers and distributors what they can say, or enable others to say. That’s precisely what the First Amendment is about. This is also a strange time in American history to consider handing this power to federal agencies, given very serious concerns about their politicization, and the President’s recent efforts to use federal authority to shape platforms’ editorial policies for user speech. 

D. Empowering State Attorney General Enforcers: State attorneys general can bring federal civil claims against platforms (if their states have analogous laws, and the state AG consults with the federal one). 

This opens up claims under any federal civil law, not just the ones agencies enforce. (Maybe this implicitly means we should interpret the agency provision above as an authorization for AG Barr to bring any federal civil claim against platforms, too?) This again seems weird, because it’s not clear why federal civil law is the relevant body of law, or why empowering State AGs to enforce it solves our most pressing problems. That’s especially the case if they will enforce easily politicized or subjective rules, like FTC Act standards for “fairness” of TOS enforcement. 

State AGs have a tendency to push for enforcement of disparate state approaches, which raises obvious problems in governing the Internet. Some also have a pretty serious history of financially or politically motivated shenanigans, including taking sides in ongoing power struggles between corporate titans in the content, tech, and telecoms industries. One state AG, for example, literally sent Google a threatening letter extensively redlined by MPAA lawyers. Wired reports similar concerns about News Corp’s role in cultivating state AG attention to Facebook in 2007. Opening up more state forum shopping for these fights under PACT, and potentially subjecting platforms to conflicting back-room political pressures from red and blue state AGs, makes me pretty uneasy.

E. Ensuring That Platforms Are Not Required to Actively Police User Speech: PACT appears to preserve this essential protection for user rights, but needs clarification.

Requiring platforms to proactively monitor their users’ communications is the third rail of intermediary liability law. In Europe, it has been the center of the biggest fights in recent years. In the U.S., making platforms actively review everything users say in search of legal violations could raise major issues under both the 1st and 4th Amendments. So far, U.S. law has steered far clear of this -- federal statutes on both copyright and child sexual abuse material, for example, expressly disclaim any monitoring requirements for intermediaries.

PACT appears to hew to this important principle with a standard immunity-is-not-conditioned-on-monitoring provision. But… it’s not entirely clear if this passage actually does the job. That’s especially the case given some troublingly fuzzy language about requiring platforms to not just take down content but also “stop… illegal activity” by users. It’s not clear what that language means, short of dispatching platforms to police users’ posts or carry out prior restraints on their speech. Getting this language buttoned up tighter will be critical if the bill moves forward. (This one is really an issue for the legal/Constitutional nerds, not the content moderation operations specialists.)

F. Regulating Consumer-Facing Edge Platforms, Not Internet Infrastructure: PACT has some limits, but needs more.  

Whatever we think the right legal obligations are for the Facebooks and YouTubes of the world, those probably are not the right obligations for Internet infrastructure providers. Companies that offer Internet access, routing, domain name resolution, content delivery networks, payment processing, and other technical or functional processing in the deeper layers of the Internet simply don’t work the same way. For one thing, they are blunt instruments. Many of them have no ability to take down just the image, post, or page that violates a law -- they can only shut off an entire website, service, or app.  

PACT takes a step toward carving these providers out of its scope, but it doesn’t go far enough. (It only carves them out to the extent that they are providing service to another 230-immunized entity.) This shouldn’t be hard to fix.

2. Rules Regarding Platforms’ Voluntary Measures to Enforce Terms of Service 

A. The Consumer Protection Model: If platforms are going to enforce private speech rules under their Terms of Service, they should state the rules clearly and enforce them consistently. Failure to do so is a consumer protection harm, like a bait-and-switch or a failure to label food correctly.

[Update July 17: Staff involved in the bill tell me that the intent of this section is for the FTC to enforce failures of process in TOS enforcement (like failure to offer appeals, publish transparency reports, etc.), not for the FTC to determine which substantive outcomes are correct under the platform's TOS. That's a meaningful difference and would mitigate some of the First Amendment concerns I mention below. Clarification in the text about this (especially re the FTC assessing "appropriate" steps by the platform) would help, since most people I consulted with about the bill and this post seemed to read (or mis-read) the provision the same way I did.]

Consumer protection provides a useful framing, and one that critics on the left and right can often agree on. European regulators used consumer protection law to reach agreements about TOS enforcement with platforms in 2018, and ideas along these lines keep coming up inside and outside the United States. At a (very) high level, I like the idea that platforms should have to make clear commitments to users, and uphold them.  But there are very real operational and constitutional issues to be resolved. 

As a practical matter, I have a lot of questions. Exactly how much detail can we reasonably demand from platforms explaining their rules – is it enough if they provide the same level of detail as state legal codes, for example? Are they supposed to notify users every time some new form of Internet misbehavior crops up and prompts them to update the rules (which happens all the time)? How far do we want to go in having courts or regulators second-guess platforms in hard judgment calls? Speaking of those courts or regulators, how would we possibly staff them for the inevitable deluge of disputes? (I think PACT’s answer is that the FTC brings the occasional enforcement action but isn’t required to handle any particular complaint.)

Then there are the constitutional issues. If a regulator rejects the platform’s interpretation of its own rules in a hard case, is it essentially overriding platform editorial policy, and does that violate platforms’ First Amendment rights? Is the government essentially picking winners and losers among lawful user posts, and does that violate the users’ First Amendment rights? Even without activist court or agency interpretations, is it a problem generally to use consumer protection law to restrict what are essentially editorial choices? (This really isn’t the same as food labeling, for example. As the Supreme Court explained in rejecting a similar analogy years ago, “[t]here is no specific constitutional inhibition against making the distributors of food the strictest censors of their merchandise, but the constitutional guarantees of the freedom of speech and of the press stand in the way of imposing a similar requirement on the bookseller.”) 

For all my sympathy for this approach, many of the things that trouble me most about PACT are in these sections. This is one place where the bill could most benefit from careful review by the plumbers I mentioned. I am not an operations specialist by Silicon Valley standards, and I did not try to vet every aspect of this proposal. But even I can see a lot of issues.  The bill requires many platforms to offer call centers, for example, not just the web forms and online communications typically used today. A noted trust and safety expert, Dave Willner, said in a panel I attended that this would lead to worse and slower outcomes for most users trying to solve real problems. He concluded that “you’d be better off taking the cash this would cost and burning it for heat.” (That’s a paraphrase. In spirit, it is the same thing I’ve heard from a lot of people.)  PACT also requires 14-day turnaround time for responding to notifications, which sounds good in theory but in practice may be truly difficult for small platforms facing hard judgment calls or sudden increases in traffic, notifications, or abuse. Even for larger platforms, a standard like this could force them to prioritize recent notices at the expense of ones that may be more serious (identifying more harmful content) or accurate (coming from a source with a good track record).  

B. Transparency Reports: Making platforms (or platforms over a certain size) publish aggregate data about content moderation, in a format that permits meaningful comparison between platforms.

I am a huge fan of transparency. Without it, we will stay where we are now: people on all sides of the platform content moderation issues will keep slinging anecdotes like mud. The ones with more power and media access will get more attention to their anecdotes. That’s a terrible basis for lawmaking. If lawmakers have the power to get the facts first and legislate second, we will all be better off.

That said, tracking more detail about the grounds for every content demotion or removal; what kind of entity sent the notice; what rule it violated for what reason; the role of automation; whether there was an appeal; etc. all add up to a fair amount of work – especially for smaller companies. They can’t track everything. The challenge is orders of magnitude bigger for transparency about what PACT calls “deprioritization” of content. Google, for example, adjusts its algorithm some 500-600 times per year, affecting literally trillions of possible search outcomes. It’s not clear what meaningful and useful transparency about that even looks like. Those of us who advocate for transparency should be smart about what precise information we ask for, so we get the optimal bang for our societal buck (and so we don’t fail to ask for something that will later turn out to matter a lot). Right now, although people like me put out lists of possible asks, and researchers who have been trying and too often failing to get important information from platforms often have very specific critiques of current transparency measures, we don't really have an informed consensus on what the priorities should be. So here, too, I say: bring on the plumbers, including both content moderation professionals and outside researchers.   

There’s a constitutional question here, too, though -- and it might be a really big one. Could transparency requirements be unconstitutional compelled speech (as the 4th Circuit recently found campaign ad transparency requirements for hosts were in Washington Post v. McManus)? Would they be like making the New York Times justify every decision to accept or reject a letter to the editor, or a wedding announcement? I haven't tried to answer that question. But we'll need to if this or other transparency legislation efforts move forward.

3. Non-Binding (I hope) Items

A. Considering future whistleblower protections:  The Government Accountability Office is directed to issue a report on the idea of protections and awards for platform employees who disclose “violations of consumer protection,” meaning improper TOS enforcement in content moderation.

I love whistleblowers. I even represented them at one point. But I shudder to think how politically loaded this particular kind of “whistleblowing” will be. The idea that a dissatisfied employee can bring ideologically grounded charges to whichever FTC Commissioners’ offices are staffed by members of his or her political party makes me want to hide in a cave. As one former platform employee told me, “bounties for selective leaking of stylized evidence against teammates is an episode of Black Mirror I’d be too scared to watch." This scenario is enough to make me wonder if those First Amendment concerns I touched on above – the ones about a government agency looking for opportunities to effectively dictate platform editorial policy in the guise of interpreting the platform’s own rules – are actually a really, really big deal. But… in any case, this whistleblower provision isn’t anything mandatory, for now.

B. A voluntary standards framework. The National Institute of Standards and Technology is directed to convene experts and issue non-binding guidelines on topics like information-sharing and use of automation.

This bears a suspicious resemblance to the “best practices” in EARN IT. Those were nominally not required in that bill’s original draft (but were in fact a prerequisite for preserving immunity). They are even more not required in EARN IT’s current draft (unless they become de facto standards for liability under the raft of state laws EARN IT unleashes). Perhaps I am naïve, but I am less worried about the voluntary standards proposed in PACT. For one thing, they won’t be crafted by nominees put in place by a who’s-who of DC heavy-hitters, as EARN IT’s would be. And the specific topics listed in the PACT Act – like developing technical standards to authenticate court orders – don’t all look like hooks for liability, like the ones in EARN IT. Most importantly, though, because PACT does not open platforms up to a flood of individual allegations under vague state laws, it leaves fewer legal blanks to be filled in by things like “voluntary” best practices or standards. Of course, the NIST standards might still come into play under PACT for courts assessing agency or AG enforcement of federal laws (or for platforms deciding whether to do what courts and AGs demand behind closed doors, to avoid going to court). So I may come to regret my optimism in calling them non-binding.

Conclusion

I can’t tell you what to think about PACT. That’s in part because I am still trying to understand some of its key provisions. (What are these federal civil laws it talks about? Will DOJ be enforcing them? What are the logistics and First Amendment ramifications of its FTC consumer protection model for TOS-based content moderation?) But it’s also in part because its core ideas are things where reasonable minds might differ. It’s not disingenuous nonsense, and it’s not a list of words that sound plausible on paper but that legal experts know are meaningless or worse. It’s a list of serious ideas, imperfectly executed. If you like any of them, you should be rooting for lawmakers to do the work to figure out how to refine them into something more operationally feasible. You should be calling on lawmakers to bring in the plumbers.   

Concluding Chapter of Social Media and Democracy: The State of the Field and Prospects for Reform (Cambridge Press, forthcoming September 2020)

Nathaniel Persily and Joshua A. Tucker

To some extent, it has been the best of times and the worst of times when it comes to social media research. As the first half of this book reveals, we are beginning to gain important insights into the dynamics of the communication revolution underway. However, despite these achievements and the widely recognized importance of this research, unique constraints have hindered the necessary concerted academic effort to answer the most important empirical questions. The key social media datasets to answer these important questions are not as readily available as were politically relevant datasets of years past. Moreover, unique legal barriers prevent analysis of such data, and related ethical and privacy concerns have arisen that have chilled academic inquiry...


Please join the Cyber Policy Center for Exploring Potential “Solutions” to Online Disinformation​, hosted by the Cyber Policy Center's Kelly Born, with guests Adam Berinsky, Mitsui Professor of Political Science at MIT and Director of the MIT Political Experiments Research Lab (PERL); David Rand, Erwin H. Schell Professor and Associate Professor of Management Science and Brain and Cognitive Sciences, and Director of the Human Cooperation Laboratory and the Applied Cooperation Team at MIT; and Avi Tuschman, Founder & CIO, Pinpoint Predictive. The session is open but registration is required.

Adam Berinsky is the Mitsui Professor of Political Science at MIT and serves as the director of the MIT Political Experiments Research Lab (PERL). He is also a Faculty Affiliate at the Institute for Data, Systems, and Society (IDSS). Berinsky received his PhD from the University of Michigan in 2000. He is the author of "In Time of War: Understanding American Public Opinion from World War II to Iraq" (University of Chicago Press, 2009). He is also the author of "Silent Voices: Public Opinion and Political Participation in America" (Princeton University Press, 2004) and has published articles in many journals. He is currently the co-editor of the Chicago Studies in American Politics book series at the University of Chicago Press. He is also the recipient of multiple grants from the National Science Foundation and was a fellow at the Center for Advanced Study in the Behavioral Sciences.

David Rand is the Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences at MIT Sloan, and the Director of the Human Cooperation Laboratory and the Applied Cooperation Team. Bridging the fields of behavioral economics and psychology, David’s research combines mathematical/computational models with human behavioral experiments and online/field studies to understand human behavior. His work uses a cognitive science perspective grounded in the tension between more intuitive versus deliberative modes of decision-making, and explores topics such as cooperation/prosociality, punishment/condemnation, perceived accuracy of false or misleading news stories, political preferences, and the dynamics of social media platform behavior. 

Avi Tuschman is a Stanford StartX entrepreneur and founder of Pinpoint Predictive, where he currently serves as Chief Innovation Officer and Board Director. He’s spent the past five years developing the first Psychometric AI-powered data-enrichment platform, which ranks 260 million individuals for performance marketing and risk management applications. Tuschman is an expert on the science of heritable psychometric traits. His book and research on human political orientation have been covered in peer-reviewed and mainstream media from 25 countries. Previous to his career in tech, he advised current and former heads of state as well as multilateral development banks in the Western Hemisphere. Tuschman completed his undergraduate and doctoral degrees in evolutionary anthropology at Stanford.

By Marietje Schaake

The European Union is often called a ‘super-regulator’, especially when it comes to data protection and privacy rules. Having seen European lawmaking up close, in all its complexity, I have often considered that label an exaggeration. Yes, the European Union frequently takes the first steps to ensure that principles continue to be protected even as digitization disrupts. However, the gap between the speed at which technology evolves and the pace of democratic lawmaking leads to perpetual mismatches.

Even the famous, or infamous, General Data Protection Regulation does not meet many essential regulatory needs of the moment. The mainstreaming of Artificial Intelligence in particular poses new challenges to the protection of rights and the sustaining of the rule of law. In its White Paper on Artificial Intelligence, as well as its Data Strategy, the European Commission references the common good, the public interest and societal needs, rather than placing the emphasis solely on regulating the digital market. These are welcome steps in acknowledging the depth and scope of technological impact and in defining harms not just in economic terms. It remains to be seen how the visions articulated in the White Paper and the Strategy will translate into concrete legislation.

One proposal to make concrete improvements to legal frameworks is outlined by Martin Tisné in The Data Delusion. He highlights the need to update legal privacy standards to better reflect the harms incurred through collective data analysis, as opposed to individual privacy violations. Martin makes a clear case for addressing the discrepancy between the profit models that benefit from grouped data and the ability of any individual to prove the harms caused to his or her rights.

The lack of transparency into the inner workings of algorithmic data processing further hinders much-needed accountability for the powerful technology businesses that operate growing parts of our information architecture and process its data flows.

While the EU takes the lead in setting values-based standards and rules for the digital layer of our societies and economies, a lot of work remains to be done.

Marietje Schaake: Martin, in your paper you address the gap between the benefits for technology companies through collective data processing, and the harms for society. You point to historic reasons for individual privacy protections in European laws. Do you consider the European Union to be the best positioned to address the legal shortcomings, especially as you point out that some opportunities to do so were missed in the GDPR?

Martin Tisné: Europe is well positioned, but perhaps not for the reasons we traditionally think of (a strong privacy tradition, empowered regulators). Individual privacy alone is a necessary, but not sufficient, foundation on which to build the future of AI regulation. And whilst much is made of European regulators, the GDPR has been hobbled by the lack of funding and capacity of data protection commissioners across Europe. What Europe does have, though, is a legal, political and societal tradition of thinking about the public interest, the common good and how these are balanced against individual interests. This is where we should innovate, taking inspiration from environmental litigation such as the Urgenda climate case against the Dutch government, which established that the government had a legal duty to prevent dangerous climate change in the name of the public interest.

And Europe also has a lot to learn from other political and legal cultures. Part of the future of data regulation may come from the indigenous data rights movement, with its greater emphasis on the societal and group impacts of data, or from the concept of Ubuntu ethics, which assigns community and personhood to all people.

Schaake: What scenario do you foresee in 10 years if collective harms are not dealt with in updates of laws? 

Tisné: I worry we will see two impacts. The first is a continuation of what we are seeing now: negative impacts of digital technologies on discrimination, voting rights, privacy and consumers. As people become increasingly aware of the problem, there will be a corresponding increase in legal challenges. We’re seeing this already, for example, with the Lloyd class action case against Google for collecting iPhone data. But I worry these challenges will fail to stick and have lasting impact because of the obligation to have the cases turn on one person’s, or a class of people’s, individual experiences. It is very hard for individuals to seek remedy for collective harms, as opposed to personal privacy invasions. So unless we solve the issue I raise in the paper – the collective impact of AI and automation – these harms will continue to fuel polarization, discrimination on the basis of age, gender and many other aspects of our lives, and the further strengthening of populist regimes.

I also worry about the ways in which algorithms will optimize on the basis of seemingly random classifications (e.g. “people who wear blue shirts, get up early on Saturday mornings, and were geo-located in a particular area of town at a particular time”). These classifications may be proxies for protected characteristics (age, gender reassignment, disability, race, religion, sex, marriage, pregnancy/maternity, sexual orientation) and so provide grounds for redress. They may also not be – leaving no grounds for redress – while still sowing the seeds of future discrimination and harms. Authoritarian rulers are likely to take advantage of the seeming invisibility of these data-driven harms to further silence their opponents. How can I protect myself if I don’t know the basis on which I am being discriminated against or targeted?

Schaake: How do you reflect on the difference in speed between technological innovations and democratic lawmaking? Some people imply this will give authoritarian regimes an advantage in setting global standards and rules. What are your thoughts on ensuring democratic governments speed up? 

Tisné: Democracies cannot afford to be outpaced by technological innovation, constantly fighting yesterday’s wars. Our laws have not changed to reflect a technology that extracts value from collective data, and they need to catch up. A lot of the problems stem from the fact that in government (as in companies), the people responsible for enforcement are separated from those with the technical understanding. The solution lies in much better translation between technology, policy and the needs of the public.

An innovation- and accountability-led government must involve and empower the public in co-creating policies, above and beyond the existing rules that engage individuals (consent forms, etc.). In the paper I propose a Public Interest Data Bill that addresses this need: the rules of the digital highway negotiated between the public and regulators, and between private data consumers and data generators. Specifically: clear transparency, public participation and realistic sanctions when things go wrong.

This is where democracies should hone their advantage over authoritarian regimes – using such an approach as the basis for setting global standards and best practices (e.g. affected communities providing input into algorithmic impact assessments). 

Schaake: The protection of privacy is what sets democratic societies apart from authoritarian ones. How likely is it that we will see an effort between democracies to set legal standards across borders together? Can we overcome the political tensions across the Atlantic, and strengthen democratic alliances globally?

Tisné: I remain a big supporter of international cooperation. Ten years ago I helped found the Open Government Partnership, which remains the main forum for 79 countries to develop innovative open government reforms jointly with the public. Its basic principles hold true: involve global south and global north countries with equal representation, bring civil society in jointly with government from the outset, seek out and empower reformers within government (they exist, regardless of who is in power in a given year), and go local to identify exciting innovations.

If we heed those principles we can set legal standards by learning from open data and civic technology reforms in Taiwan, experiments with data trusts in India, legislation to hold algorithms accountable in France; and by identifying and working with the individuals driving those innovations, reformers such as Audrey Tang in Taiwan, Katarzyna Szymielewicz in Poland, and Henri Verdier in France. 

These reformers need a home, a base from which to influence policymakers and technologists, and to get the people responsible for enforcement working with those with the technical understanding. The Global Partnership on Artificial Intelligence may be that home, but these are early days; it needs to be agile enough to work with the private sector and civil society as well as governments and the international system. I remain hopeful.

 

 


Protecting the Individual Isn't Enough When the Harm Is Collective: A Q&A with Marietje Schaake and Martin Tisné on his new paper, The Data Delusion.

-


There will be four events, with the first on September 29th; all dates listed below


  • September 29th, 9-11am PDT
  • October 1st, 9-11am PDT
  • October 6th, 9-11am PDT
  • October 9th, 9-11am PDT

 

 

The Rise of Digital Authoritarianism: China, AI and Human Rights

Day 1- September 29, 2020

Welcome Remarks

Larry Diamond | Senior Fellow, Hoover Institution and FSI, Principal Investigator, Global Digital Policy Incubator

Glenn Tiffert | Research Fellow, Hoover Institution

Jenny Wang | Strategic Advisor, Human Rights Foundation

Opening Remarks

Condoleezza Rice | Director, Hoover Institution, Former U.S. Secretary of State, Denning Professor in Global Business at the Graduate School of Business

 

Panel 1: How AI Is Powering China's Domestic Surveillance State - How is AI exacerbating surveillance risks and enabling digital authoritarianism? This session will examine both state-sponsored applications and Chinese commercial services.

Panelists

Bethany Allen-Ebrahimian | China Reporter, Axios

Paul Mozur | Asia Technology Correspondent, New York Times

Glenn Tiffert | Research Fellow, Hoover Institution

Xiao Qiang | UC Berkeley & Editor-in-Chief, China Digital Times

Moderator

Melissa Chan | Foreign Affairs Reporter, Deutsche Welle Asia

 

Day 2- October 1, 2020

Panel 2: The Ethics of Doing Business with China and Chinese Companies

Eric Schmidt | Former Executive Chairman and CEO, Google; Co-Founder, Schmidt Futures

Conversant: Eileen Donahoe, Executive Director of GDPi

 

Panel 2: The Ethics of Doing Business with China and Chinese Companies - What dynamics are at play in China's effort to establish market dominance for Chinese companies, both domestically and globally? What demands are placed on non-Chinese technology companies to participate in the Chinese marketplace? What framework should U.S.-based companies use to evaluate the risks and opportunities for collaboration and market entry in China? To what extent are Chinese companies (e.g., TikTok) competing in Western markets required to comply with Chinese government instructions or demands for access to data?

Panelists

Mary Hui | Hong Kong-based Technology and Business Reporter, Quartz

Megha Rajagopalan | International Correspondent and Former China Bureau Chief, BuzzFeed News

Alex Stamos | Director, Stanford Internet Observatory & Former Chief Security Officer, Facebook

Moderator

Casey Newton | Silicon Valley Editor, The Verge

 

Day 3- October 6, 2020

Panel 3: China as an Emerging Global AI Superpower

Keynote & Conversation

Competing in the Superpower Marathon with China

Mike Brown | Director, Defense Innovation Unit

Conversant: Larry Diamond, Senior Fellow, Hoover Institution and FSI, Principal Investigator, Global Digital Policy Incubator

Panel 3: China as an Emerging Global AI Superpower - How should we think about China's growing influence in the realm of AI and the attendant geopolitical risks and implications? This session will explore China’s bid through Huawei to build and control the world's 5G networks, and what that implies for human rights and national sovereignty and security; China's export of surveillance technology to authoritarian regimes around the world; China's global partnerships to research and develop AI; and the problem of illicit technology transfer/theft.

Panelists

Steven Feldstein | Senior Fellow, Carnegie Endowment for International Peace 

Lindsay Gorman | Fellow for Emerging Technologies, Alliance for Securing Democracy, German Marshall Fund 

Maya Wang | China Senior Researcher, Human Rights Watch

Moderator

Dominic Ziegler | Senior Asia Correspondent and Banyan Columnist, The Economist

 

Day 4- October 9, 2020

Panel 4: How Democracies Should Respond to China’s Emergence as an AI Superpower

Keynote

Digital Social Innovation: Taiwan Can Help

Audrey Tang | Digital Minister, Taiwan

Panel 4: How Democracies Should Respond to China's Emergence as an AI Superpower - How should the rest of the world, and especially the world's democracies, react to China's bid to harness AI for ill as well as good? How do we strike the right balance between vigilance in defense of human rights and national security and xenophobic overreaction?

Panelists

Christopher Balding | Associate Professor, Fulbright University Vietnam

Anja Manuel | Co-Founder, Rice, Hadley, Gates & Manuel

Chris Meserole | Deputy Director of the Artificial Intelligence and Emerging Technology Initiative, Brookings Institution

Moderator

Larry Diamond | Senior Fellow, Hoover Institution and FSI, Principal Investigator, Global Digital Policy Incubator

 

Closing Keynote & Conversation

Strengthening Human-Centered Artificial Intelligence

Fei-Fei Li | Co-Director, Stanford Institute for Human-Centered Artificial Intelligence (HAI)

Conversant: Eileen Donahoe, Executive Director of GDPi

Closing Remarks: Alex Gladstein & Eileen Donahoe

Seminars
-


Please join the Cyber Policy Center for Towards Cyber Peace: Closing the Accountability Gap, hosted by the Cyber Policy Center's Marietje Schaake, along with guests Stéphane Duguin, CEO of the CyberPeace Institute, and Camille François, Chief Innovation Officer of Graphika and Mozilla Fellow. The discussion will focus on the challenges to cyber peace and the work being done to chart a path forward. The session is open to the public, but registration is required.

Marietje Schaake is the international policy director at Stanford University’s Cyber Policy Center and international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. She was named president of the CyberPeace Institute. Between 2009 and 2019, Marietje served as a Member of the European Parliament for the Dutch liberal democratic party, where she focused on trade, foreign affairs and technology policies. Marietje is affiliated with a number of non-profits, including the European Council on Foreign Relations and the Observer Research Foundation in India, and writes a monthly column for the Financial Times and a bi-monthly column for the Dutch NRC newspaper.

Camille François works on cyber conflict and digital rights online. She is the Chief Innovation Officer at Graphika, where she leads the company’s work to detect and mitigate disinformation, media manipulation and harassment. Camille was previously the Principal Researcher at Jigsaw, an innovation unit at Google that builds technology to address global security challenges and protect vulnerable users. Camille has advised governments and parliamentary committees on both sides of the Atlantic on policy issues related to cybersecurity and digital rights. She served as a special advisor to the Chief Technology Officer of France in the Prime Minister’s office, working on France’s first Open Government roadmap. Camille is a Mozilla Fellow, a Berkman Klein Center affiliate, and a Fulbright scholar. She holds a master’s degree in human rights from the French Institute of Political Sciences (Sciences-Po) and a master’s degree in international security from the School of International and Public Affairs (SIPA) at Columbia University. François’ work has been featured in various publications, including the New York Times, WIRED, Washington Post, Bloomberg Businessweek, Globo and Le Monde.

Stéphane Duguin is the Chief Executive Officer of the CyberPeace Institute. His mission is to coordinate a collective response to decrease the frequency, impact, and scale of cyberattacks by sophisticated actors. Building on his hands-on experience in countering and analyzing cyber operations and information operations that impact civilians and civilian infrastructure, he leads the Institute with the aim of holding malicious actors to account for the harms they cause. Prior to this position, Stéphane Duguin was a senior manager and innovation coordinator at Europol. He led key operational projects to counter both cybercrime and online terrorism, such as the setup of the European Cybercrime Centre (EC3), the Europol Innovation Lab, and the European Internet Referral Unit (EU IRU). A leader in digital transformation, his work focused on implementing innovative responses to large-scale abuse of cyberspace, notably at the convergence of disruptive technologies and public-private partnerships.

 


From the Stanford Institute for Human-Centered AI (HAI) blog:

More than 25 governments around the world, including those of the United States and across the European Union, have adopted elaborate national strategies on artificial intelligence — how to spur research; how to target strategic sectors; how to make AI systems reliable and accountable.

Yet a new analysis finds that almost none of these declarations provide more than a polite nod to human rights, even though artificial intelligence has potentially big impacts on privacy, civil liberties, racial discrimination, and equal protection under the law.

That’s a mistake, says Eileen Donahoe, executive director of Stanford’s Global Digital Policy Incubator, which produced the report in conjunction with a leading international digital rights organization called Global Partners Digital.

Read More (at the HAI blog)


In the rush to develop national strategies on artificial intelligence, a new report finds, most governments pay lip service to civil liberties.

-

Join the Cyber Policy Center, June 17th at 10am Pacific Time, for Patterns and Potential Solutions to Disinformation Sharing, Under COVID-19 and Beyond, with Josh Tucker, David Lazer and Evelyn Douek.

The session will explore which types of readers are most susceptible to fake news, whether crowdsourced fact-checking by ordinary citizens works and whether it can reduce the prevalence of false news in the information ecosystem. Speakers will also look at patterns of (mis)information sharing regarding COVID-19: Who is sharing what type of information? How has this varied over time? How much misinformation is circulating, and among whom? Finally, we'll explore how social media platforms are responding to COVID disinformation, how that differs from responses to political disinformation, and what we think they could be doing better.

Evelyn Douek is a doctoral candidate and lecturer on law at Harvard Law School, and Affiliate at the Berkman Klein Center For Internet & Society. Her research focuses on online speech governance, and the various private, national and global proposals for regulating content moderation.

David Lazer is a professor of political science and computer and information science and the co-director of the NULab for Texts, Maps, and Networks. Before joining the Northeastern faculty in fall 2009, he was an associate professor of public policy at Harvard’s John F. Kennedy School of Government and director of its Program on Networked Governance. 

Joshua Tucker is Professor of Politics, Director of the Jordan Center for the Advanced Study of Russia, Co-Director of the NYU Social Media and Political Participation (SMaPP) lab, Affiliated Professor of Russian and Slavic Studies and Affiliated Professor of Data Science.

The event is open to the public, but registration is required.

Online, via Zoom

-

On Thursday, President Trump signed an executive order threatening to revoke CDA 230 protections, which would expose social media companies to increased liability for content posted on their sites. This comes on the heels of Twitter fact-checking, last week, two misleading tweets from the president about mail-in voting. Critics of the executive order say the White House is overstepping its authority and cannot limit the legal protections that social media companies currently hold under federal law.
 
Join the Stanford Cyber Policy Center's team Monday, June 1 at 8am PDT for President Trump’s Executive Order on Platforms and Online Speech: Stanford’s Cyber Policy Center Responds, with Nate Persily, Faculty Co-Director of the Cyber Policy Center and Director of the Program on Democracy and the Internet; Daphne Keller, Director of the Program on Platform Regulation and former Associate General Counsel for Google; Alex Stamos, Director of the Cyber Center’s Internet Observatory and former Chief Security Officer at Facebook; Marietje Schaake, Policy Director for the Cyber Policy Center and former Member of the European Parliament; and Eileen Donahoe, Executive Director of the Global Digital Policy Incubator and former US Ambassador to the UN Human Rights Council, in conversation with Cyber Center Director Kelly Born.

Monday, June 1st
8am PDT
Join via Zoom

Panel Discussions
-

This event is co-sponsored with the Cyber Policy Center and the Center for a New American Security.

* Please note all CISAC events are scheduled using the Pacific Time Zone

 

Seminar Recording: https://youtu.be/KaydMdIVtGc

 

About the Event: The United States is steadily losing ground in the race against China to pioneer the most important technologies of the 21st century. With technology a critical determinant of future military advantage, a key driver of economic prosperity, and a potent tool for the promotion of different models of governance, the stakes could not be higher. To compete, China is leveraging its formidable scale—whether measured in terms of research and development expenditures, data sets, scientists and engineers, venture capital, or the reach of its leading technology companies. The only way for the United States to tip the scale back in its favor is to deepen cooperation with allies. The global diffusion of innovation also places a premium on aligning U.S. and ally efforts to protect technology. Unless coordinated with allies, tougher U.S. investment screening and export control policies will feature major seams that Beijing can exploit.

In early June, join Stanford's Center for International Security and Cooperation (CISAC) and the Center for a New American Security (CNAS) for a unique virtual event featuring three policy experts advancing concrete ideas for how the United States can enhance cooperation with allies on technology innovation and protection.

This webinar will be on-the-record, and include time for audience Q&A.

 

About the Speakers: 

Anja Manuel, Stanford Research Affiliate, CNAS Adjunct Senior Fellow, Partner at Rice, Hadley, Gates & Manuel LLC, and author with Pav Singh of Compete, Contest and Collaborate: How to Win the Technology Race with China.

 

Daniel Kliman, Senior Fellow and Director, CNAS Asia-Pacific Security Program, and co-author of a recent report, Forging an Alliance Innovation Base.

 

Martijn Rasser, Senior Fellow, CNAS Technology and National Security Program, and lead researcher on the Technology Alliance Project

Virtual Seminar

Anja Manuel, Daniel Kliman, and Martijn Rasser
Seminars