Authors
Charles Mok
Kenny Huang
News Type
News
Date
Paragraphs

In new work, Global Digital Policy Incubator (GDPi) Research Scholar Charles Mok and Kenny Huang, a leader in Asia’s internet communities, examine Taiwan’s reliance on fragile external systems and how that reliance exposes Taiwan to threats such as geopolitical conflict, cyberattacks, and natural disasters. The key, write Mok and Huang, is strengthening governance, enhancing investment, and fostering international cooperation in order to secure a resilient future.

For more, read the full paper, out now and free to download.

Read More

News

Agenda for the Trust & Safety Research Conference is now Live!

Speaker line-up for the third annual Trust & Safety Research Conference announced.
Blogs

Master’s in International Policy students publish capstone reports on Taiwan’s cybersecurity and online resiliency

Through the Policy Change Studio, students partner with international organizations to propose policy-driven solutions to new digital challenges.
Subtitle

A new paper from Charles Mok of GDPi examines the current landscape of Taiwan’s Internet Infrastructure

Authors
Stanford Internet Observatory
News Type
News
Date
Paragraphs

The agenda for the 2024 Trust & Safety Research Conference is now available. The conference includes two packed days of lightning talks, research presentations, panels, workshops, and a poster session. It features an impressive lineup of speakers, including keynote speakers Camille François (Associate Professor of the Practice of International and Public Affairs, Columbia University) and Arvind Narayanan (Professor of Computer Science and Director of the Center for Information Technology Policy, Princeton University).

The Trust & Safety Research Conference convenes a diverse group of academics, researchers, and practitioners from fields including computer science, sociology, law, and political science. It features networking opportunities, including happy hours, and complimentary breakfast and lunch on both days.

Register now and save a spot before early bird pricing ends on August 1.

More details on the conference website

Read More

News

3rd Annual Trust & Safety Research Conference announced for September 26-27, 2024

Presentation proposals and abstracts due April 30, 2024
Blogs

Stanford Internet Observatory launches the Trust and Safety Teaching Consortium

A new teaching consortium will share open access teaching material for developing classes on online trust and safety.
News

The first issue of the Journal of Online Trust and Safety

The Journal of Online Trust and Safety published its inaugural issue on Thursday, October 28.
Subtitle

Speaker line-up for the third annual Trust & Safety Research Conference announced.

Authors
Stanford Internet Observatory
News Type
News
Date
Paragraphs

Registration is now open for the third annual Trust & Safety Research Conference at Stanford University from September 26-27, 2024. Join us for two days of cross-professional presentations and conversations designed to push forward research on trust and safety.

Hosted at Stanford University’s Frances C. Arrillaga Alumni Center, the Trust & Safety Research Conference convenes participants working on trust and safety issues across academia, industry, civil society, and government. The event brings together a cross-disciplinary group of academics and researchers in fields including computer science, sociology, law, and political science to connect with practitioners and policymakers on challenges and new ideas for studying and addressing online trust and safety issues.

Your ticket provides access to:

  • Two days of talks, panels, workshops and breakouts
  • Breakfast and lunch both days of the conference
  • Networking opportunities, including happy hours and poster sessions

Early bird tickets are $150 for attendees from academia, civil society and government, and $600 for attendees from industry. Ticket prices go up August 1, 2024.


Subtitle

Tickets on sale for the third annual Trust & Safety Research Conference to be held September 26-27, 2024. Lock in early bird prices by registering before August 1.

Authors
Clifton B. Parker
News Type
News
Date
Paragraphs

While the potential benefits of artificial intelligence are significant and far-reaching, AI’s potential dangers to the global order necessitate an astute governance and policy-making approach, panelists said at the Freeman Spogli Institute for International Studies (FSI) on May 23.

An alumni event hosted by the Ford Dorsey Master’s in International Policy (MIP) program featured a panel discussion on “The Impact of AI on the Global Order.” Participants included Anja Manuel, Jared Dunnmon, David Lobell, and Nathaniel Persily. The moderator was Francis Fukuyama, Olivier Nomellini Senior Fellow at FSI and director of the master’s program.

Manuel, an affiliate at FSI’s Center for International Security and Cooperation and executive director of the Aspen Strategy Group, said that what “artificial intelligence is starting to already do is it creates superpowers in the way it intersects with other technologies.”

Manuel, an alumna of the MIP program, noted an experiment in Switzerland a year ago in which researchers asked an AI tool to come up with new nerve agents – and it did so very rapidly, generating 40,000 of them. On the subject of strategic nuclear deterrence, AI capabilities may upend existing policy approaches. Though about 30 countries have voluntarily signed up to follow governance standards on how AI would be used in military conflicts, the future is unclear.

“I worry a lot,” said Manuel, noting that AI-controlled fighter jets will likely be more effective than human-piloted craft. “There is a huge incentive to escalate and to let the AI do more and more and more of the fighting, and I think the U.S. government is thinking it through very carefully.”
 


AI amplifies the abilities of all good and bad actors in the system to achieve all the same goals they’ve always had.
Nathaniel Persily
Co-director of the Cyber Policy Center


Geopolitical Competition


Dunnmon, a CISAC affiliate and senior advisor to the director of the Defense Innovation Unit, spoke about the “holistic geopolitical competition” among world powers in the AI realm as these systems offer “unprecedented speed and unprecedented scale.”

“Within that security lens, there’s actually competition across the entirety of the technical AI stack,” he said.

Dunnmon said an underlying security question is whether a given piece of AI software is running on top of libraries sourced from Western companies or is instead built on an underlying library stack owned by state enterprises. “That’s a different world.”

He said that “countries are competing for data, and it’s becoming a battlefield of geopolitical competition.”

Societal, Environmental Implications


Lobell, a senior fellow at FSI and the director of the Center for Food Security and the Environment, said his biggest concerns are how AI might change the functioning of societies, as well as possible bioterrorism.

“Any environment issue is basically a collective action problem, and you need well-functioning societies with good governance and political institutions, and if that crumbles, I don’t think we have much hope.”

On the positive aspects of AI, he said the combination of AI with synthetic biology and gene editing is starting to produce much faster production cycles for agricultural products, new breeds of animals, and novel foods. One company, for example, figured out how to make a good substitute for milk using pineapple, cabbage, and other ingredients.

Lobell said that AI can identify which ships are illegally harvesting seafood and trace that catch back to where the cargo is eventually offloaded. In addition, AI can help create deforestation-free supply chains, and AI mounted on farm tractors can help reduce by 90% the use of chemicals that pose environmental risks.

“There’s clear tangible progress being made with these technologies in the realm of the environment, and we can continue to build on that,” he added.
 


Countries are competing for data, and it’s becoming a battlefield of geopolitical competition.
Jared Dunnmon
Affiliate at the Center for International Security and Cooperation (CISAC)


AI and Democracy


Persily, a senior fellow and co-director of FSI’s Cyber Policy Center, said, “AI amplifies the abilities of all good and bad actors in the system to achieve all the same goals they’ve always had.”

He noted, “AI is not social media,” even though it can interact with social media. Persily said AI is far more pervasive and significant than any given platform such as Facebook. Problems arise in the areas of privacy, antitrust, bias, and disinformation, but AI issues are “characteristically different” from those of social media.

“One of the ways that AI is different than social media is the fact that they are open-source tools. We need to think about this in a little bit of a different way, which is that it is not just a few companies that can be regulated on closed systems,” Persily said.

As a result, AI tools are available to all of us, he said. “There is the possibility that some of the benefits of AI could be realized more globally,” but there are also risks. For example, in the year and a half since OpenAI released ChatGPT, which is open sourced, child pornography has multiplied on the Internet.

“The democratization of AI will lead to fundamental challenges to establish legacy infrastructure for the governance of the propagation of content,” Persily said.

Balance of AI Power


Fukuyama pointed out that an AI lab at Stanford could not afford leading-edge technology, yet countries such as the U.S. and China have deeper resources to fund AI endeavors.

“This is something obviously that people are worried about,” he said, “whether these two countries are going to dominate the AI race and the AI world and disadvantage everybody.”

Manuel said that most of AI is now operating with voluntary governance – “patchwork” – and that dangerous things involving AI can be done now. “In the end, we’re going to have to adopt a negotiation and an arms control approach to the national security side of this.” 

Lobell said that while it might seem universities cannot keep up with industry, researchers have shown they can reproduce those models’ performance just days after their release.
 


In the end, we’re going to have to adopt a negotiation and an arms control approach to the national security side of this.
Anja Manuel
Affiliate at the Center for International Security and Cooperation (CISAC)


On regulation — the European Union is currently weighing legislation — Persily said it would be difficult to enforce regulations and interpret risk assessments, so what is needed is a “transparency regime” and an infrastructure giving civil society a clear view of what models are being released – yet this will be complex.

“I don’t think we even really understand what a sophisticated, full-on AI audit of these systems would look like,” he said.

Dunnmon suggested that an AI governance entity could be created, similar to how the U.S. Food and Drug Administration reviews pharmaceuticals before release.

In terms of AI and military conflicts, he spoke about the need for AI and humans to understand the rewards and risks involved, and in the case of the latter, how the risk compares to the “next best option.”

“How do you communicate that risk, how do you assess that risk, and how do you make sure the right person with the right equities and the right understanding of those risks is making that risk trade-off decision?” he asked.



The Ford Dorsey Master’s in International Policy program was established in 1982 to provide students with the knowledge and skills necessary to analyze and address complex global challenges in a rapidly changing world, and to prepare the next generation of leaders for public and private sector careers in international policymaking and implementation.

Read More

News

Research can help to tackle AI-generated disinformation

New work in Nature Human Behaviour from SIO researchers and other co-authors looks at how generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard for humans to detect and may undermine public trust.
News

Special Envoy Jacinda Ardern Assembles Stanford Scholars for Discussion on Technology Governance and Regulation

Led by former Prime Minister of New Zealand Rt. Hon. Dame Jacinda Ardern, a delegation from the Christchurch Call joined Stanford scholars to discuss how to address the challenges posed by emerging technologies.
Subtitle

At a gathering for alumni, the Ford Dorsey Master's in International Policy program hosted four experts to discuss the ramifications of AI on global security, the environment, and political systems.

Publication Type
Journal Articles
Publication Date
Journal Publisher
Nature
Authors
Kevin Aslett
Jonathan Nagler
Joshua A. Tucker
Zeve Sanderson
William Godel
Nathaniel Persily
Authors
Stanford Internet Observatory
News Type
Blogs
Date
Paragraphs

Risk and harm are set to scale exponentially and may strangle the opportunities these generational technologies create. We have a narrow window of opportunity to leverage decades of hard-won lessons and invest in reinforcing human dignity and societal resilience globally.

That which occurs offline will occur online, and increasingly there is no choice but to engage with online tools even in a formerly offline space. As the distinction between “real” and “digital” worlds inevitably blurs, we must accept that the digital future—and any trustworthy future web—will reflect all of the complexity and impossibility that would be inherent in understanding and building a trustworthy world offline.

Scaling Trust on the Web, the comprehensive final report of the Task Force for a Trustworthy Future Web, maps systems-level dynamics and gaps that impact the trustworthiness and usefulness of online spaces. It highlights where existing approaches will not adequately meet future needs, particularly given emerging metaversal and generative AI technologies. Most importantly, it identifies immediate interventions that could catalyze safer, more trustworthy online spaces, now and in the future.

We are at a pivotal moment in the evolution of online spaces. A rare combination of regulatory sea change that will transform markets, landmarks in technological development, and newly consolidating expertise can open a window into a new and better future. Risk and harm are currently set to scale and accelerate at an exponential pace, and existing institutions, systems, and market drivers cannot keep pace. Industry will continue to drive rapid changes, but also prove unable or unwilling to solve the core problems at hand. In response, innovations in governance, research, financial, and inclusion models must scale with similar velocity.

While some harms and risks must be accepted as part of protecting the fundamental freedoms that underpin free societies, choices made when creating or maintaining online spaces generate risks, harms, and beneficial impacts. These choices are not value neutral, because their resulting products do not enter into neutral societies. Malignancy migrates, and harms are not equally distributed across societies. Marginalized communities suffer disproportionate levels of harm online and off. Online spaces that do not account for that reality consequently scale malignancy and marginalization.

Within industry, decades of “trust and safety” (T&S) practice has developed into a field that can illuminate the complexities of building and operating online spaces. Outside industry, civil society groups, independent researchers, and academics continue to lead the way in building collective understanding of how risks propagate via online platforms—and how products could be constructed to better promote social well-being and to mitigate harms.

Read More

News

Registration Open for the 2023 Trust and Safety Research Conference

Tickets on sale for the Stanford Internet Observatory’s Trust and Safety Research Conference to be held September 28-29, 2023. Lock in early bird prices by registering before August 1.
Blogs

Addressing the distribution of illicit sexual content by minors online

A Stanford Internet Observatory investigation identified large networks of accounts, purportedly operated by minors, selling self-generated illicit sexual content. Platforms have updated safety measures based on the findings, but more work is needed.
Blogs

Platform Accountability and Transparency Act Reintroduced in Senate

Published in Tech Policy Press
Subtitle

The report from the Task Force for a Trustworthy Future Web maps systems-level dynamics and gaps that impact the trustworthiness and usefulness of online spaces.

Paragraphs

Hate speech is a contextual phenomenon. What offends or inflames in one context may differ from what incites violence in a different time, place, and cultural landscape. Theories of hate speech, especially Susan Benesch’s concept of “dangerous speech” (hateful speech that incites violence), have focused on the factors that cut across these paradigms. However, the existing scholarship is narrowly focused on situations of mass violence or societal unrest in America or Europe.

Publication Type
Journal Articles
Publication Date
Subtitle

Published by Michigan Law School Scholarship Repository

Journal Publisher
Michigan Law School Scholarship Repository
Authors
Brittan Heller
Number
2
Authors
News Type
News
Date
Paragraphs
76 Platforms v. Supreme Court with Daphne Keller (part 1)
Subtitle

Daphne Keller spoke with the Initiative for Digital Public Infrastructure at the University of Massachusetts Amherst about two potentially major cases currently before the Supreme Court

Authors
News Type
Blogs
Date
Paragraphs

Picture this: you are sitting in the kitchen of your home enjoying a drink. As you sip, you scroll through your phone, looking at the news of the day. You text a link to a news article critiquing your government’s stance on the press to a friend who works in media. Your sibling sends you a message on an encrypted service updating you on the details of their upcoming travel plans. You set a reminder on your calendar about a doctor’s appointment, then open your banking app to make sure the payment for this month’s rent was processed.

Everything about this scene is personal. Nothing about it is private.

Without your knowledge or consent, your phone has been infected with spyware. This technology makes it possible for someone to silently watch and take careful notes about who you are, who you know, and what you’re doing. They see your files, have your contacts, and know the exact route you took home from work on any given day. They can even turn on your phone’s microphone and listen to the conversations you’re having in the room.

This is not some hypothetical, Orwellian drama, but a reality for thousands of people around the world. This kind of technology — once a capability only of the most technologically advanced governments — is now commercially available from numerous private companies known to sell it to state agencies and private actors alike. This total loss of privacy should worry everyone, but for human rights activists and journalists challenging authoritarian powers, it has become a matter of life and death.

The companies that develop and sell this technology are only passively accountable to governments at best, and at worst have their tacit support. It is this lack of regulation that Marietje Schaake, the International Policy Director at the Cyber Policy Center and International Policy Fellow at Stanford HAI, is trying to change.
 

Amsterdam and Tehran: A Tale of Two Elections


Schaake did not begin her professional career with the intention of becoming Europe’s “most wired politician,” as she has frequently been dubbed by the press. In many ways, her step into politics came as something of a surprise, albeit a pleasant one.
 
“I've always been very interested in public service and trying to improve society and the lives of others, but I ran not expecting at all that I would actually get elected,” Schaake confesses.

As a candidate on the 2008 ticket of the Democrats 66 (D66) political party of the Netherlands, Schaake saw herself as someone who could help move the party’s campaign forward, not as a serious contender in the open party election system. But when her party performed exceptionally well, Schaake, then 30, landed in the third position on a 30-person list vying to fill the 25 seats available for representatives from all political parties in the Netherlands. Having taken a top spot among a field of hundreds of candidates, she found herself on her way to becoming a Member of the European Parliament (MEP).

Marietje Schaake participates in a panel on human rights and communication technologies as a member of the European Parliament in April 2012. (Alberto Novi, Flickr)

In 2009, world events collided with Schaake’s position as a newly seated MEP. While the democratic elections in the EU unfolded without incident, 3,000 miles away in Iran a very different story was playing out. Following the re-election of Mahmoud Ahmadinejad to a second term as Iran’s president, allegations of fraud and vote tampering were immediately voiced by supporters of former prime minister Mir-Hossein Mousavi, the leading candidate opposing Ahmadinejad. The protests that followed quickly morphed into the Green Movement, one of the largest sustained protest movements in Iran’s history since the 1979 Iranian Revolution, and the largest until the protests over the death of Mahsa Amini began in September 2022.
 
With the protests came an increased wave of state violence against the demonstrators. While repression and intimidation are nothing new to autocratic regimes, in 2009 the proliferation of cell phones in the hands of an increasingly digitally connected population allowed citizens to document human rights abuses firsthand and beam the evidence directly from the streets of Tehran to the rest of the world in real-time.
 
As more and more footage poured in from the situation on the ground, Schaake, with a pre-politics background in human rights and a specific interest in civil rights, took up the case of the Green Movement as one of her first major issues in the European Parliament. She was appointed spokesperson on Iran for her political group. 

Marietje Schaake [second from left] alongside her colleagues from the European Parliament during a press conference on universal human rights in 2010. (Alberto Novi, Flickr)

The Best of Tech and the Worst of Tech


But the more Schaake learned, the clearer it became that the Iranian protesters were not the only ones using technology to stay informed about the protests. Meeting with rights defenders who had escaped from Iran to eastern Turkey, Schaake heard anecdote after anecdote about how the Islamic Republic’s authorities were using tech to surveil, track, and censor dissenting opinions.
 
Investigations indicated that the authorities were utilizing a technique referred to at the time as “deep packet inspection,” a system that allows the controller of a communications network to read and block information passing through it, alter communications, and collect data about specific individuals. What was more, journalists revealed that many of the systems such regimes were using to perform this type of surveillance had been bought from, and were serviced by, Western companies.
 
For Schaake, this revelation was a turning point of her focus as a politician and the beginning of her journey into the realm of cyber policy and tech regulation.
 
“On the one hand, we were sharing statements urging to respect the human rights of the demonstrators. And then it turned out that European companies were the ones selling this monitoring equipment to the Iranian regime. It became immediately clear to me that if technology was to play a role in enhancing human rights and democracy, we couldn’t simply trust the market to make it so; we needed to have rules,” Schaake explained.

We have to have a line in the sand and a limit to the use of this technology. It’s extremely important, because this is proliferating not only to governments, but also to non-state actors.
Marietje Schaake
International Policy Director at the Cyber Policy Center

The Transatlantic Divide


But who writes the rules? When it comes to tech regulation, there is longstanding unease between the private and public sectors, and a different approach on the eastern and western shores of the Atlantic. In general, EU member countries favor oversight of the technology sector and have supported legislation like the General Data Protection Regulation (GDPR) and the Digital Services Act to protect user privacy and digital human rights. On the other hand, major tech companies — many of them based in North America — favor the doctrine of self-regulation and frequently cite claims to intellectual property or widely defined protections such as Section 230 as justification for keeping government oversight at arm’s length. Efforts by governing bodies like the European Union to legislate privacy and transparency requirements are often met with raised hackles.
 
It’s a feeling Schaake has encountered many times in her work. “When you talk to companies in Silicon Valley, they make it sound as if Europeans are after them and that these regulations are weapons meant to punish them,” she says.
 
But the need to place checks on those with power is rooted in history, not histrionics, says Schaake. Memories of living under the eye of surveillance states such as the Soviet Union and East Germany are still fresh in many Europeans’ minds. The drive to protect privacy is as much about keeping the government in check as it is about reining in the outsized influence and power of private technology companies, Schaake asserts.
 

Big Brother Is Watching


In the last few years, the momentum has begun to shift. 
 
In 2021, a joint reporting effort by The Guardian, The Washington Post, Le Monde, Proceso, and over 80 journalists at a dozen additional news outlets, working in partnership with Amnesty International and Forbidden Stories, published the Pegasus Project, a detailed report showing that spyware from the private company NSO Group was used to target, track, and retaliate against tens of thousands of people, including journalists, activists, civil rights leaders, and even prominent politicians around the world.
 
This type of surveillance has evolved quickly beyond the network monitoring undertaken by regimes like Iran’s in the 2000s, and now taps into the most personal details of an individual’s device, data, and communications. In the absence of widespread regulation, companies like NSO Group have been able to develop commercial products with capabilities as sophisticated as those of state intelligence bureaus. In many cases, “zero-click” infections are now possible, meaning a device can be targeted and have the spyware installed without the user ever knowing or suspecting that they have become a victim of covert surveillance.

Marietje Schaake [left] moderates a panel at the 2023 Summit for Democracy with Neal Mohan, CEO of YouTube; John Scott-Railton, Senior Researcher at Citizen Lab; Avril Haines, U.S. Director of National Intelligence; and Alejandro N. Mayorkas, U.S. Secretary of Homeland Security. (U.S. Department of State)

“If we were to create a spectrum of harmful technologies, spyware could easily take the top position,” said Schaake, speaking as the moderator of a panel on “Countering the Misuse of Technology and the Rise of Digital Authoritarianism” at the 2023 Summit for Democracy co-hosted by U.S. President Joe Biden alongside the governments of Costa Rica, the Netherlands, Republic of Korea, and Republic of Zambia.
 
Revelations like those of the Pegasus Project have helped spur what Schaake believes is long-overdue action from the United States on regulating this sector of the tech world. On March 27, 2023, President Biden signed an executive order prohibiting the operational use of commercial spyware products by the United States government. It is the first time such an action has been formally taken in Washington.
 
For Schaake, the order is a “fantastic first step,” but she also cautions that there is still much more to be done. Spyware developed by governments themselves is not covered by Biden's executive order, nor is its use by private individuals who can get their hands on these tools.

Human Rights vs. National Security


One of Schaake’s main concerns is the potential for governmental overreach in the pursuit of curtailing the influence of private companies.
 
Schaake explains, “What's interesting is that while the motivation in Europe for this kind of regulation is very much anchored in fundamental rights, in the U.S., what typically moves the needle is a call to national security, or concern for China.”
 
It is important to stay vigilant about how national security can become a justification for curtailing civil liberties. Writing for the Financial Times, Schaake elaborated on the potential conflict of interest the government has in regulating tech more rigorously:
 
“The U.S. government is right to regulate technology companies. But the proposed measures, devised through the prism of national security policy, must also pass the democracy test. After 9/11, the obsession with national security led to warrantless wiretapping and mass data collection. I back moves to curb the outsized power of technology firms large and small. But government power must not be abused.”
 
While Schaake hopes well-established democracies will do more to lead by example, she also acknowledges that the political will to actually step up to do so is often lacking. In principle, countries rooted in the rule of law and the principles of human rights decry the use of surveillance technology beyond their own borders. But in practice, these same governments are also sometimes customers of the surveillance industrial complex. 

It’s up to us to guarantee the upsides of technology and limit its downsides. That’s how we are going to best serve our democracy in this moment.
Marietje Schaake
International Policy Director at the Cyber Policy Center

Schaake has been trying to make that disparity an impossible needle for nations to keep threading. For over a decade, she has called for an end to the surveillance industry and has worked on developing export control rules for the sale of surveillance technology from Europe to other parts of the world. But while these measures make it harder for non-democratic regimes to purchase these products from the West, the legislation is still limited in its ability to keep European and other Western nations from importing spyware systems like Pegasus themselves. And for as long as that reality remains, it undermines the credibility of the EU and the West as a whole, says Schaake.
 
Speaking at the 2023 Summit for Democracy, Schaake urged policymakers to keep the bigger picture in mind when it comes to the risks of unaccountable, ungoverned spyware industries. “We have to have a line in the sand and a limit to the use of this technology. It’s extremely important, because this is proliferating not only to governments, but also to non-state actors. This is not the world we want to live in.”

 

Building Momentum for the Future


Drawing those lines in the sand is crucial not just for the immediate safety and protection of individuals who have been targeted with spyware, but also for addressing technology’s other harms to the long-term health of democracy.

“The narrative that technology is helping people's democratic rights, or access to information, or free speech has been oversold, whereas the need to actually ensure that democratic principles govern technology companies has been underdeveloped,” Schaake argues.

While no longer an active politician, Schaake has not slowed her pace in raising awareness and contributing her expertise to policymakers trying to find ways of threading the digital needle on tech regulation. Working at the Cyber Policy Center at the Freeman Spogli Institute for International Studies (FSI), Schaake has been able to combine her experiences in European politics with her academic work in the United States against the backdrop of Silicon Valley, the home-base for many of the world’s leading technology companies and executives.
 
Though now half a globe away from the European Parliament, Schaake’s original motivations to improve society and people’s lives have not dimmed.

Though no longer working in government, Schaake, seen here at a conference on regulating Big Tech hosted by Stanford's Institute for Human-Centered Artificial Intelligence (HAI), continues to research and advocate for better regulation of technology industries. (Midori Yoshimura)

“It’s up to us to guarantee the upsides of technology and limit its downsides. That’s how we are going to best serve our democracy in this moment,” she says.
 
Schaake is clear-eyed about the hurdles still ahead on the road to meaningful legislation about tech transparency and human rights in digital spaces. With a highly partisan Congress in the United States and other issues like the war in Ukraine and concerns over China taking center stage, it will take time and effort to build a critical mass of political will to tackle these issues. But Biden’s executive order and the discussion of issues like digital authoritarianism at the Summit for Democracy also give Schaake hope that progress can be made.
 
“The bad news is we're not there yet. The good news is there's a lot of momentum for positive change and improvement, and I feel like people are beginning to understand how much it is needed.”
 
And for anyone ready to jump into the fray and make an impact, Schaake adds a standing invitation: “I’m always happy to grab a coffee and chat. Let’s talk!”



The complete recording of "Countering the Misuse of Technology and the Rise of Digital Authoritarianism," the panel Marietje Schaake moderated at the 2023 Summit for Democracy, is available below.

Subtitle

A transatlantic background and a decade of experience as a lawmaker in the European Parliament have given Marietje Schaake a unique perspective as a researcher investigating the harms technology is causing to democracy and human rights.

Paragraphs

The Computer Fraud and Abuse Act (CFAA) provides a civil cause of action for computer hacking victims that have suffered certain types of harm. Of these harms, the one most commonly invoked by plaintiffs is having suffered $5,000 or more of cognizable “loss” as defined by the statute. In its first-ever CFAA case, 2021’s Van Buren v. United States, the Supreme Court included intriguing language that “loss” in civil cases should be limited to “technological harms” constituting “the typical consequences of hacking.” To date, lower courts have only followed the Court’s interpretation if their circuit already interpreted “loss” narrowly pre-Van Buren and have continued to approach “loss” broadly otherwise.

Van Buren did not fully dissipate the legal risks the CFAA has long posed to a particular community: people who engage in good-faith cybersecurity research. Discovering and reporting security vulnerabilities in software and hardware risks legal action from vendors displeased with unflattering revelations about their products’ flaws. Research activities have even led to criminal investigations at times. Although Van Buren narrowed the CFAA’s scope and prompted reforms in federal criminal charging policy, researchers continue to face some legal exposure. The CFAA still lets litigious vendors “shoot the messenger” by suing over security research that did them no harm. Spending just $5,000 addressing a vulnerability is sufficient to allow the vendor to sue the researcher who reported it, because such remediation costs qualify as “loss” even in courts that read that term narrowly.

To mitigate the CFAA’s legal risk to researchers, a common proposal is a statutory safe harbor for security research. Such proposals walk a fine line between being unduly byzantine for good-faith actors to follow and lax enough to invite abuse by malicious actors. Instead of the safe harbor approach, this article recommends a simpler way to reduce litigation over harmless research: follow the money.

The Article proposes (1) amending the CFAA’s “loss” definition to prevent vulnerability remediation costs alone from satisfying the $5,000 standing threshold absent any other alleged loss, and (2) adding a fee-shifting provision that can be invoked where plaintiffs’ losses do not meet that threshold. Tightening up the “loss” calculus would disqualify retaliatory litigation against beneficial (or at least benign) security research while preserving victims’ ability to seek redress where well-intended research activities do cause harm. Fee-shifting would deter weak CFAA claims and give the recipients of legal threats some leverage to fight back. Coupled with the Van Buren decision, these changes would reach beyond the context of vendor versus researcher: they would help rein in the CFAA’s rampant misuse over behavior far afield from the law’s core anti-hacking purpose.

Publication Type
Journal Articles
Publication Date
Journal Publisher
Richmond Journal of Law & Technology
Authors
Riana Pfefferkorn
Number
1