Security

FSI scholars produce research aimed at creating a safer world and examining the consequences of security policies for institutions and society. They study longstanding issues, including nuclear nonproliferation and the conflict between North and South Korea. But their research also examines new and emerging areas that transcend traditional borders, such as the drug war in Mexico and expanding terrorism networks. FSI researchers look at the changing methods of warfare with a focus on biosecurity and nuclear risk. They tackle cybersecurity with an eye toward privacy concerns and explore the implications of new actors such as hackers.

Along with the changing face of conflict, terrorism and crime, FSI researchers study food security. They tackle the global problems of hunger, poverty and environmental degradation by generating knowledge and policy-relevant solutions. 

Blog by Riana Pfefferkorn

When we’re faced with a video recording of an event—such as an incident of police brutality—we can generally trust that the event happened as shown in the video. But that may soon change, thanks to the advent of so-called “deepfake” videos that use machine learning technology to show a real person saying and doing things they haven’t.

This technology poses a particular threat to marginalized communities. If deepfakes cause society to move away from the current “seeing is believing” paradigm for video footage, that shift may negatively impact individuals whose stories society is already less likely to believe. The proliferation of video recording technology has fueled a reckoning with police violence in the United States, recorded by bystanders and body-cameras. But in a world of pervasive, compelling deepfakes, the burden of proof to verify authenticity of videos may shift onto the videographer, a development that would further undermine attempts to seek justice for police violence. To counter deepfakes, high-tech tools meant to increase trust in videos are in development, but these technologies, though well-intentioned, could end up being used to discredit already marginalized voices. 

(Content Note: Some of the links in this piece lead to graphic videos of incidents of police violence. Those links are denoted in bold.)

Recent police killings of Black Americans caught on camera have inspired massive protests that have filled U.S. streets in the past year. Those protests endured for months in Minneapolis, where former police officer Derek Chauvin was convicted this week in the murder of George Floyd, a Black man. During Chauvin’s trial, another police officer killed Daunte Wright just outside Minneapolis, prompting additional protests as well as the officer’s resignation and arrest on second-degree manslaughter charges. She supposedly mistook her gun for her Taser—the same mistake alleged in the fatal shooting of Oscar Grant in 2009, by an officer whom a jury later found guilty of involuntary manslaughter (but not guilty of a more serious charge). All three of these tragic deaths—George Floyd, Daunte Wright, Oscar Grant—were documented in videos that were later used (or, in Wright’s case, seem likely to be used) as evidence at the trials of the police officers responsible. Both Floyd’s and Wright’s deaths were captured by the respective officers’ body-worn cameras, and multiple bystanders with cell phones recorded the Floyd and Grant incidents. Some commentators credit a 17-year-old Black girl’s video recording of Floyd’s death for making Chauvin’s trial happen at all.

The growth of the movement for Black lives in the years since Grant’s death in 2009 owes much to the rise in the availability, quality, and virality of bystander videos documenting police violence, but this video evidence hasn’t always been enough to secure convictions. From Rodney King’s assailants in 1992 to Philando Castile’s shooter 25 years later, juries have often declined to convict police officers even in cases where wanton police violence or killings are documented on video. Despite their growing prevalence, police bodycams have had mixed results in deterring excessive force or impelling accountability. That said, bodycam videos do sometimes make a difference, helping to convict officers in the killings of Jordan Edwards in Texas and Laquan McDonald in Chicago. Chauvin’s defense team pitted bodycam footage against the bystander videos employed by the prosecution, and lost.

What makes video so powerful? Why does it spur crowds to take to the streets and lawyers to showcase it in trials? It’s because seeing is believing. Shot from angles that differ from the officers’ point of view, bystander footage paints a fuller picture of what happened. Two people (on a jury, say, or watching a viral video online) might interpret a video two different ways. But they’ve generally been able to take for granted that the footage is a true, accurate record of something that really happened.

That might not be the case for much longer. It’s now possible to use artificial intelligence to generate highly realistic “deepfake” videos showing real people saying and doing things they never said or did, such as the recent viral TikTok videos depicting an ersatz Tom Cruise. You can also find realistic headshots of people who don’t exist at all on the creatively-named website thispersondoesnotexist.com. (There’s even a cat version.) 

While using deepfake technology to invent cats or impersonate movie stars might be cute, the technology has more sinister uses as well. In March, the Federal Bureau of Investigation issued a warning that malicious actors are “almost certain” to use “synthetic content” in disinformation campaigns against the American public and in criminal schemes to defraud U.S. businesses. The breakneck pace of deepfake technology’s development has prompted concerns that techniques for detecting such imagery will be unable to keep up. If so, the high-tech cat-and-mouse game between creators and debunkers might end in a stalemate at best. 

If it becomes impossible to reliably prove that a fake video isn’t real, a more feasible alternative might be to focus instead on proving that a real video isn’t fake. So-called “verified at capture” or “controlled-capture” technologies attach additional metadata to imagery at the moment it’s taken, to verify when and where the footage was recorded and reveal any attempt to tamper with the data. The goal of these technologies, which are still in their infancy, is to ensure that an image’s integrity will stand up to scrutiny. 
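
To make the idea concrete, here is a minimal sketch, assuming a device that holds a signing key, of how a controlled-capture scheme could work: the camera hashes the image bytes together with capture metadata (time, location) and signs the digest, so any later edit to the pixels or the metadata invalidates the signature. The function names, metadata fields, and use of the Python cryptography package are my illustration, not any vendor's actual design.

```python
# Minimal sketch of the "verified at capture" idea (illustrative only, not a
# real product's design): hash the image together with its capture metadata,
# then sign the digest with a key held by the capturing device.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def _digest(image_bytes: bytes, metadata: dict) -> bytes:
    # Canonicalize the metadata so signer and verifier hash identical bytes.
    return hashlib.sha256(
        image_bytes + json.dumps(metadata, sort_keys=True).encode()
    ).digest()

def sign_capture(image_bytes: bytes, metadata: dict, device_key: Ed25519PrivateKey) -> bytes:
    """Run at capture time on the device; returns a signature stored with the file."""
    return device_key.sign(_digest(image_bytes, metadata))

def verify_capture(image_bytes: bytes, metadata: dict, signature: bytes,
                   device_pub: Ed25519PublicKey) -> bool:
    """Any later change to the pixels, timestamp, or location invalidates the signature."""
    try:
        device_pub.verify(signature, _digest(image_bytes, metadata))
        return True
    except InvalidSignature:
        return False

# Hypothetical usage: sign at capture, verify when the footage is offered as evidence.
key = Ed25519PrivateKey.generate()
meta = {"captured_at": "2021-04-20T17:05:00Z", "gps": [44.98, -93.26], "device_id": "demo"}
frame = b"...raw frame bytes..."
sig = sign_capture(frame, meta, key)
assert verify_capture(frame, meta, sig, key.public_key())
assert not verify_capture(frame + b"tampered", meta, sig, key.public_key())
```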

Photo and video verification technology holds promise for confirming what’s real in the age of “fake news.” But it’s also cause for concern. In a society where guilty verdicts for police officers remain elusive despite ample video evidence, is even more technology the answer? Or will it simply reinforce existing inequities? 

The “ambitious goal” of adding verification technology to smartphone chipsets necessarily entails increasing the cost of production. Once such phones start to come onto the market, they will be more expensive than lower-end devices that lack this functionality. And not everyone will be able to afford them. Black Americans and poor Americans have lower rates of smartphone ownership than whites and high earners, and are more likely to own a “dumb” cell phone. (The same pattern holds true with regard to educational attainment and urban versus rural residence.) Unless and until verification technology is baked into even the most affordable phones, it risks replicating existing disparities in digital access. 

That has implications for police accountability, and, by extension, for Black lives. Primed by societal concerns about deepfakes and “fake news,” juries may start expecting high-tech proof that a video is real. That might lead them to doubt the veracity of bystander videos of police brutality if they were captured on lower-end phones that lack verification technology. Extrapolating from current trends in phone ownership, such bystanders are more likely to be members of marginalized racial and socioeconomic groups. Those are the very people who, as witnesses in court, face an uphill battle in being afforded credibility by juries. That bias, which reared its ugly head again in the Chauvin trial, has long outlived the 19th-century rules that explicitly barred Black (and other non-white) people from testifying for or against white people on the grounds that their race rendered them inherently unreliable witnesses. 

In short, skepticism of “unverified” phone videos may compound existing prejudices against the owners of those phones. That may matter less in situations where a diverse group of numerous eyewitnesses record a police brutality incident on a range of devices. But if there is only a single bystander witness to the scene, the kind of phone they own could prove significant.

The advent of mobile devices empowered Black Americans to force a national reckoning with police brutality. Ubiquitous, pocket-sized video recorders allow average bystanders to document the pandemic of police violence. And because seeing is believing, those videos make it harder for others to continue denying the problem exists. Yet even with the evidence thrust under their noses, juries keep acquitting police officers who kill Black people. Chauvin’s conviction this week represents an exception to recent history: between 2005 and 2019, of the 104 law enforcement officers charged with murder or manslaughter in connection with an on-duty shooting, only 35 were convicted.

The fight against fake videos will complicate the fight for Black lives. Unless it is equally available to everyone, video verification technology may not help the movement for police accountability, and could even set it back. Technological guarantees of videos’ trustworthiness will make little difference if they are accessible only to the privileged, whose stories society already tends to believe. We might be able to tech our way out of the deepfakes threat, but we can’t tech our way out of America’s systemic racism. 

Riana Pfefferkorn is a research scholar at the Stanford Internet Observatory.

Blog by Daphne Keller

I am a huge fan of transparency about platform content moderation. I’ve considered it a top policy priority for years, and written about it in detail (with Paddy Leerssen, who also wrote this great piece about recommendation algorithms and transparency). I sincerely believe that without it, we are unlikely to correctly diagnose current problems or arrive at wise legal solutions.

So it pains me to admit that I don’t really know what “transparency” I’m asking for. I don’t think many other people do, either. Researchers and public interest advocates around the world can agree that more transparency is better. But, aside from people with very particular areas of interest (like political advertising), almost no one has a clear wish list. What information is really important? What information is merely nice to have? What are the trade-offs involved?

That imprecision is about to become a problem, though it’s a good kind of problem to have. A moment of real political opportunity is at hand. Lawmakers in the US, Europe, and elsewhere are ready to make some form of transparency mandatory. Whatever specific legal requirements they create will have huge consequences. The data, content, or explanations they require platforms to produce will shape our future understanding of platform operations, and our ability to respond — as consumers, as advocates, or as democracies. Whatever disclosures the laws don’t require may never happen.

It’s easy to respond to this by saying “platforms should track all the possible data, we’ll see what’s useful later!” Some version of this approach might be justified for the very biggest “gatekeeper” or “systemically important” platforms. Of course, making Facebook or Google save all that data would be somewhat ironic, given the trouble they’ve landed in by storing similar not-clearly-needed data about their users in the past. (And the more detailed data we store about particular takedowns, the likelier it is to be personally identifiable.)

For any platform, though, we should recognize that the new practices required for transparency reporting come at a cost. That cost might include driving platforms to adopt simpler, blunter content rules in their Terms of Service. That would reduce their expenses in classifying or explaining decisions, but presumably lead to overly broad or overly narrow content prohibitions. It might raise the cost of adding “social features” like user comments enough that some online businesses, like retailers or news sites, just give up on them. That would reduce some forms of innovation and eliminate useful information for Internet users. For small and midsized platforms, transparency obligations (like other expenses related to content moderation) might add yet another reason to give up on competing with today’s giants and accept an acquisition offer from an incumbent that already has moderation and transparency tools. Highly prescriptive transparency obligations might also drive de facto standardization and homogeneity in platform rules, moderation practices, and features.

None of these costs provides a reason to give up on transparency — or even to greatly reduce our expectations. But all of them are reasons to be thoughtful about what we ask for. It would be helpful if we could better quantify these costs, or get a handle on which kinds of transparency reporting are easier and harder to do in practice.

I’ve made a (very in the weeds) list of operational questions about transparency reporting, to illustrate some issues that are likely to arise in practice. I think detailed examples like these are helpful in thinking through both which kinds of data matter most, and how much precision we need within particular categories. For example, I personally want to know with great precision how many government orders a platform received, how it responded, and whether any orders led to later judicial review. But to me it seems OK to allow some margin of error for platforms that don’t have standardized tracking and queuing tools, and that as a result might modestly mis-count TOS takedowns (either by absolute numbers or percent).

I’ll list that and some other recommendations below. But these “recommendations” are very tentative. I don’t know enough to have a really clear set of preferences yet. There are things I wish I could learn from technologists, activists, and researchers first. The venues where those conversations would ordinarily happen — and, importantly, where observers from very different backgrounds and perspectives could have compared the issues they see, and the data they most want — have been sadly reduced for the past year.

So here is my very preliminary list:

  • Transparency mandates should be flexible enough to accommodate widely varying platform practices and policies. Any de facto push toward standardization should be limited to the very most essential data.
  • The most important categories of data are probably the main ones listed in the DSA: number of takedowns, number of appeals, number of successful appeals. But as my list demonstrates, those all can become complicated in practice (see the tally sketch after this list).
  • It’s worth taking the time to get legal transparency mandates right. That may mean delegating exact transparency rules to regulatory agencies in some countries, or conducting studies prior to lawmaking in others.
  • Once rules are set, lawmakers should be very reluctant to move the goalposts. If a platform (especially a smaller one) invests in rebuilding its content moderation tools to track certain categories of data, it should not have to overhaul those tools soon because of changed legal requirements.
  • We should insist on precise data in some cases, and tolerate more imprecision in others (based on the importance of the issue, platform capacity, etc.). And we should take the time to figure out which is which.
  • Numbers aren’t everything. Aggregate data in transparency reports ultimately just tell us what platforms themselves think is going on. To understand what mistakes they make, or what biases they may exhibit, independent researchers need to see the actual content involved in takedown decisions. (This in turn raises a slew of issues about storing potentially unlawful content, user privacy and data protection, and more.)
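
To make the bookkeeping concrete, here is a minimal tally sketch, entirely my own illustration rather than the DSA's schema or any platform's real tooling, of the per-decision record a platform might keep and the aggregation that produces the headline counts above. Even this toy version runs into the kinds of judgment calls flagged in the list (pending appeals, categorizing the legal basis of a removal).

```python
# Minimal sketch (illustrative only): a per-decision record and the aggregation
# step behind the three headline figures: takedowns, appeals, successful appeals.
from dataclasses import dataclass
from collections import Counter
from typing import Optional

@dataclass
class ModerationDecision:
    item_id: str
    basis: str                           # hypothetical categories: "tos", "government_order", "court_order"
    removed: bool
    appealed: bool = False
    appeal_upheld: Optional[bool] = None  # None while the appeal is still pending

def transparency_totals(decisions: list[ModerationDecision]) -> Counter:
    totals: Counter = Counter()
    for d in decisions:
        if d.removed:
            totals["takedowns"] += 1
            totals[f"takedowns_{d.basis}"] += 1
        if d.appealed:
            totals["appeals"] += 1
            if d.appeal_upheld:  # pending appeals silently excluded: one of many judgment calls
                totals["successful_appeals"] += 1
    return totals

# Hypothetical usage:
print(transparency_totals([
    ModerationDecision("a1", "tos", removed=True, appealed=True, appeal_upheld=True),
    ModerationDecision("a2", "government_order", removed=True),
    ModerationDecision("a3", "tos", removed=False, appealed=True),  # appeal of a non-removal decision
]))
```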

It’s time to prioritize. Researchers and civil society should assume we are operating with a limited transparency “budget,” which we must spend wisely — asking for the information we can best put to use, and factoring in the cost. We need better understanding of both research needs and platform capabilities to do this cost-benefit analysis well. I hope that the window of political opportunity does not close before we manage to do that.

Daphne Keller is the Director of the Program on Platform Regulation at the Cyber Policy Center.


-

End-to-end encrypted (E2EE) communications have been around for decades, but the deployment of default E2EE on billion-user platforms has new implications for user privacy and safety. The deployment brings benefits to both individuals and society, but it also creates new risks, as long-existing forms of messenger abuse can now flourish in an environment that automated or human review cannot reach. New E2EE products raise the prospect of less understood risks by adding discoverability to encrypted platforms, allowing contact from strangers and increasing the risk of certain types of abuse. This workshop will place particular focus on platform benefits and risks that affect civil society organizations, especially in the Global South. Through a series of workshops and policy papers, the Stanford Internet Observatory is facilitating open and productive dialogue on this contentious topic to find common ground.

An important defining principle behind this workshop series is the explicit assumption that E2EE is here to stay. To that end, our workshops have set aside any discussion of exceptional-access (aka backdoor) designs. That debate has raged among industry, academic cryptographers, and law enforcement for decades, and little progress has been made. We focus instead on less widely explored or implemented interventions that can reduce the harms of E2EE communication products.

Submissions for working papers and requests to attend will be accepted up to 10 days before the event. Accepted submitters will be invited to present or attend our upcoming workshops. 

News by Charles Mok and Kenny Huang

In new work, Global Digital Policy Incubator (GDPi) Research Scholar Charles Mok and Kenny Huang, a leader in Asia’s internet communities, examine Taiwan’s reliance on fragile external systems and how that reliance exposes Taiwan to threats such as geopolitical conflict, cyberattacks, and natural disasters. The key, write Mok and Huang, is strengthening governance, enhancing investment, and fostering international cooperation in order to secure a resilient future.

For more, read the full paper, out now and free to download.

News from the Stanford Internet Observatory

The agenda for the 2024 Trust & Safety Research Conference is now available. The conference includes two packed days of lightning talks, research presentations, panels, workshops, and a poster session. It features an amazing lineup of speakers, including keynote speakers Camille François (Associate Professor of the Practice of International and Public Affairs, Columbia University) and Arvind Narayanan (Professor of Computer Science and Director of the Center for Information Technology Policy, Princeton University).

The Trust & Safety Research Conference convenes a diverse group of academics, researchers, and practitioners from fields including computer science, sociology, law, and political science. It features networking opportunities including happy hours, and complimentary breakfast and lunch are provided on both days.

Register now and save a spot before early bird pricing ends on August 1.

More details on the conference website

News from the Stanford Internet Observatory

Registration is now open for the third annual Trust & Safety Research Conference at Stanford University from September 26-27, 2024. Join us for two days of cross-professional presentations and conversations designed to push forward research on trust and safety.

Hosted at Stanford University’s Frances C. Arrillaga Alumni Center, the Trust & Safety Research Conference convenes participants working on trust and safety issues across academia, industry, civil society, and government. The event brings together a cross-disciplinary group of academics and researchers in fields including computer science, sociology, law, and political science to connect with practitioners and policymakers on challenges and new ideas for studying and addressing online trust and safety issues.

Your ticket provides access to:

  • Two days of talks, panels, workshops and breakouts
  • Breakfast and lunch both days of the conference
  • Networking opportunities, including happy hours and poster sessions

Early bird tickets are $150 for attendees from academia, civil society and government, and $600 for attendees from industry. Ticket prices go up August 1, 2024.



News by Clifton B. Parker

While the potential benefits of artificial intelligence are significant and far-reaching, AI’s potential dangers to the global order necessitate an astute approach to governance and policymaking, panelists said at the Freeman Spogli Institute for International Studies (FSI) on May 23.

An alumni event hosted by the Ford Dorsey Master’s in International Policy (MIP) program featured a panel discussion on “The Impact of AI on the Global Order.” Participants included Anja Manuel, Jared Dunnmon, David Lobell, and Nathaniel Persily. The moderator was Francis Fukuyama, Olivier Nomellini Senior Fellow at FSI and director of the master’s program.

Manuel, an affiliate at FSI’s Center for International Security and Cooperation and executive director of the Aspen Strategy Group, said that what “artificial intelligence is starting to already do is it creates superpowers in the way it intersects with other technologies.”

Manuel, an alumna of the MIP program, noted an experiment in Switzerland a year ago in which researchers asked an AI tool to come up with new nerve agents; it did so very rapidly, generating 40,000 of them. On the subject of strategic nuclear deterrence, AI capabilities may upend existing policy approaches. Though about 30 countries have voluntarily signed up to follow governance standards on how AI would be used in military conflicts, the future is unclear.

“I worry a lot,” said Manuel, noting that AI-controlled fighter jets will likely be more effective than human-piloted craft. “There is a huge incentive to escalate and to let the AI do more and more and more of the fighting, and I think the U.S. government is thinking it through very carefully.”
 




Geopolitical Competition


Dunnmon, a CISAC affiliate and senior advisor to the director of the Defense Innovation Unit, spoke about the “holistic geopolitical competition” among world powers in the AI realm as these systems offer “unprecedented speed and unprecedented scale.”

“Within that security lens, there’s actually competition across the entirety of the technical AI stack,” he said.

Dunnmon said an underlying security question is whether a given AI system is running on top of libraries sourced from Western companies or on top of an underlying library stack owned by state enterprises. “That’s a different world.”

He said that “countries are competing for data, and it’s becoming a battlefield of geopolitical competition.”

Societal, Environmental Implications


Lobell, a senior fellow at FSI and the director of the Center for Food Security and the Environment, said his biggest concerns are how AI might change the functioning of societies, along with the possibility of bioterrorism.

“Any environment issue is basically a collective action problem, and you need well-functioning societies with good governance and political institutions, and if that crumbles, I don’t think we have much hope.”

On the positive side, he said the combination of AI with synthetic biology and gene editing is starting to produce much faster production cycles for agricultural products, new breeds of animals, and novel foods. One company, for example, found a way to make a good milk substitute from pineapple, cabbage, and other ingredients.

Lobell said that AI can identify which ships are illegally harvesting seafood and trace where they eventually offload that cargo. In addition, AI can help create deforestation-free supply chains, and AI mounted on farm tractors can help cut the use of environmentally risky chemicals by 90%.

“There’s clear tangible progress being made with these technologies in the realm of the environment, and we can continue to build on that,” he added.
 




AI and Democracy


Persily, a senior fellow and co-director of FSI’s Cyber Policy Center, said, “AI amplifies the abilities of all good and bad actors in the system to achieve all the same goals they’ve always had.”

He noted that “AI is not social media,” even though it can interact with social media. Persily said AI is much more pervasive and significant than any given platform such as Facebook. Problems arise in the areas of privacy, antitrust, bias, and disinformation, but AI issues are “characteristically different” from those of social media.

“One of the ways that AI is different than social media is the fact that they are open-source tools. We need to think about this in a little bit of a different way, which is that it is not just a few companies that can be regulated on closed systems,” Persily said.

As a result, AI tools are available to all of us, he said. “There is the possibility that some of the benefits of AI could be realized more globally,” but there are also risks. For example, in the year and a half since OpenAI released ChatGPT, which is open sourced, child pornography has multiplied on the Internet.

“The democratization of AI will lead to fundamental challenges to established legacy infrastructure for the governance of the propagation of content,” Persily said.

Balance of AI Power


Fukuyama pointed out that an AI lab at Stanford could not afford leading-edge technology, yet countries such as the U.S. and China have deeper resources to fund AI endeavors.

“This is something obviously that people are worried about,” he said, “whether these two countries are going to dominate the AI race and the AI world and disadvantage everybody.”

Manuel said that most of AI is now operating with voluntary governance – “patchwork” – and that dangerous things involving AI can be done now. “In the end, we’re going to have to adopt a negotiation and an arms control approach to the national security side of this.” 

Lobell said that while it might seem universities can’t keep up with industry, researchers have shown they can reproduce those models’ performance within days of their release.
 




On regulation (the European Union is currently weighing legislation), Persily said it would be difficult to enforce regulations and interpret risk assessments. What is needed instead, he said, is a “transparency regime” and an infrastructure that gives civil entities a clear view of what models are being released, though this will be complex.

“I don’t think we even really understand what a sophisticated, full-on AI audit of these systems would look like,” he said.

Dunnmon suggested that an AI governance entity could be created, similar to how the U.S. Food and Drug Administration reviews pharmaceuticals before release.

In terms of AI and military conflicts, he spoke about the need for AI and humans to understand the rewards and risks involved, and in the case of the latter, how the risk compares to the “next best option.”

“How do you communicate that risk, how do you assess that risk, and how do you make sure the right person with the right equities and the right understanding of those risks is making that risk trade-off decision?” he asked.



The Ford Dorsey Master’s in International Policy program was established in 1982 to provide students with the knowledge and skills necessary to analyze and address complex global challenges in a rapidly changing world, and to prepare the next generation of leaders for public and private sector careers in international policymaking and implementation.


Hate speech is a contextual phenomenon. What offends or inflames in one context may differ from what incites violence in a different time, place, and cultural landscape. Theories of hate speech, especially Susan Benesch’s concept of “dangerous speech” (hateful speech that incites violence), have focused on the factors that cut across these paradigms. However, the existing scholarship is narrowly focused on situations of mass violence or societal unrest in America or Europe.

Journal article by Brittan Heller, published by the Michigan Law School Scholarship Repository.

The Computer Fraud and Abuse Act (CFAA) provides a civil cause of action for computer hacking victims who have suffered certain types of harm. Of these harms, the one most commonly invoked by plaintiffs is having suffered $5,000 or more of cognizable “loss” as defined by the statute. In its first-ever CFAA case, 2021’s Van Buren v. United States, the Supreme Court included intriguing language suggesting that “loss” in civil cases should be limited to “technological harms” constituting “the typical consequences of hacking.” To date, lower courts have followed the Court’s interpretation only if their circuit already interpreted “loss” narrowly pre-Van Buren, and have continued to approach “loss” broadly otherwise.

Van Buren did not fully dissipate the legal risks the CFAA has long posed to a particular community: people who engage in good-faith cybersecurity research. Discovering and reporting security vulnerabilities in software and hardware risks legal action from vendors displeased with unflattering revelations about their products’ flaws. Research activities have even led to criminal investigations at times. Although Van Buren narrowed the CFAA’s scope and prompted reforms in federal criminal charging policy, researchers continue to face some legal exposure. The CFAA still lets litigious vendors “shoot the messenger” by suing over security research that did them no harm. Spending just $5,000 addressing a vulnerability is sufficient to allow the vendor to sue the researcher who reported it, because such remediation costs qualify as “loss” even in courts that read that term narrowly.

To mitigate the CFAA’s legal risk to researchers, a common proposal is a statutory safe harbor for security research. Such proposals walk a fine line between being unduly byzantine for good-faith actors to follow and lax enough to invite abuse by malicious actors. Instead of the safe harbor approach, this article recommends a simpler way to reduce litigation over harmless research: follow the money.

The Article proposes (1) amending the CFAA’s “loss” definition to prevent vulnerability remediation costs alone from satisfying the $5,000 standing threshold absent any other alleged loss, and (2) adding a fee-shifting provision that can be invoked where plaintiffs’ losses do not meet that threshold. Tightening up the “loss” calculus would disqualify retaliatory litigation against beneficial (or at least benign) security research while preserving victims’ ability to seek redress where well-intended research activities do cause harm. Fee-shifting would deter weak CFAA claims and give the recipients of legal threats some leverage to fight back. Coupled with the Van Buren decision, these changes would reach beyond the context of vendor versus researcher: they would help rein in the CFAA’s rampant misuse over behavior far afield from the law’s core anti-hacking purpose.

Journal article by Riana Pfefferkorn, published in the Richmond Journal of Law & Technology.
-
Fall Seminar Series

Join the Cyber Policy Center and moderator Daniel Bateyko in conversation with Karen Nershi for “How Strong Are International Standards in Practice? Evidence from Cryptocurrency Transactions.”

The rise of cryptocurrency (decentralized digital currency) presents challenges for state regulators given its connection to illegal activity and its pseudonymous nature, which has allowed both individuals and businesses to circumvent national laws through regulatory arbitrage. Karen Nershi assesses the degree to which states have managed to regulate cryptocurrency exchanges, providing a detailed study of international efforts to impose common regulatory standards on a new technology. To do so, she introduces a dataset of cryptocurrency transactions collected during a two-month period in 2020 from exchanges in countries around the world and employs bunching estimation to compare levels of unusual activity below a threshold at which exchanges must screen customers for money laundering risk. She finds that exchanges in some, but not all, countries show substantial unusual activity below the threshold; these findings suggest that while countries have made progress toward regulating cryptocurrency exchanges, gaps in enforcement across countries allow for regulatory arbitrage.
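
For readers unfamiliar with the method, the sketch below illustrates the basic logic of bunching estimation in simplified form: count transactions in bins just below the screening threshold and compare them with a counterfactual predicted from bins away from the threshold. The $1,000 threshold, bin width, polynomial counterfactual, and simulated data are all assumptions of mine, not Dr. Nershi's actual specification.

```python
# Rough sketch of bunching estimation (simplified, illustrative parameters):
# compare observed transaction counts just below a screening threshold with a
# smooth counterfactual fitted from bins away from the threshold.
import numpy as np

def excess_mass_below_threshold(amounts, threshold, bin_width=50.0,
                                window_bins=2, poly_degree=3):
    """Estimate 'excess' transactions bunched just below `threshold`."""
    edges = np.arange(0.0, threshold + 10 * bin_width, bin_width)
    counts, _ = np.histogram(amounts, bins=edges)
    centers = (edges[:-1] + edges[1:]) / 2

    # Bins immediately below the threshold form the suspected bunching window.
    in_window = (centers < threshold) & (centers >= threshold - window_bins * bin_width)

    # Fit a smooth counterfactual distribution on all bins outside that window.
    coefs = np.polyfit(centers[~in_window], counts[~in_window], poly_degree)
    predicted = np.polyval(coefs, centers[in_window])

    # Excess mass = observed minus counterfactual counts in the bunching window.
    return counts[in_window].sum() - predicted.sum()

# Hypothetical usage with simulated data: mostly "honest" transaction sizes,
# plus some deliberately kept just under a (made-up) $1,000 screening threshold.
rng = np.random.default_rng(0)
sizes = np.concatenate([rng.exponential(400.0, 5000), rng.uniform(950.0, 999.0, 300)])
print(excess_mass_below_threshold(sizes, threshold=1000.0))
```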

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policymakers, scholars, and industry professionals together to share research, findings, and trends in the cyber policy space. Both in-person attendance (Stanford affiliation required) and virtual attendance (open to the public) are available; registration is required.

Karen Nershi is a Postdoctoral Fellow at the Stanford Internet Observatory and the Center for International Security and Cooperation (CISAC) at Stanford University. In the summer of 2021, she completed her Ph.D. in political science at the University of Pennsylvania, specializing in international relations and comparative politics. Through an empirical lens, her research examines questions of international cooperation and regulation within international political economy, including challenges emerging from the adoption of decentralized digital currency and other new technologies.

Specific topics Dr. Nershi explores in her research include ransomware, cross-national regulation of the cryptocurrency sector, and international cooperation around anti-money laundering enforcement. Her research has been supported by the University of Pennsylvania GAPSA Provost Fellowship for Innovation and the Christopher H. Browne Center for International Politics. 

Before beginning her doctorate, Karen Nershi earned a B.A. in International Studies with honors at the University of Alabama. She lived and studied Arabic in Amman, Jordan and Meknes, Morocco as a Foreign Language and Area Studies Fellow and a Critical Language Scholarship recipient. She also lived and studied in Mannheim, Germany, in addition to interning at the U.S. Consulate General Frankfurt (Frankfurt, Germany).

Dan Bateyko is the Special Projects Manager at the Stanford Internet Observatory.

Dan worked previously as a Research Coordinator for The Center on Privacy & Technology at Georgetown Law, where he investigated Immigration and Customs Enforcement surveillance practices, co-authoring American Dragnet: Data-Driven Deportation in the 21st Century. He has worked at the Berkman Klein Center for Internet & Society and the Dangerous Speech Project, and as a research assistant for Amanda Levendowski, whom he assisted with legal scholarship on facial surveillance.

In 2016, he received a Thomas J. Watson Fellowship. He spent his fellowship year talking with people about digital surveillance and Internet infrastructure in South Korea, China, Malaysia, Germany, Ghana, Russia, and Iceland. His writing has appeared in Georgetown Tech Law Review, Columbia Journalism Review, Dazed Magazine, The Internet Health Report, Council on Foreign Relations' Net Politics, and Global Voices. He is a 2022 Internet Law & Policy Foundry Fellow.

Dan received his Master of Law & Technology from Georgetown University Law Center (where he received the IAPP Westin Scholar Book Award for excellence in Privacy Law), and his B.A. from Middlebury College.
