
Bio: Amy Webb

Despite an abundance of technical experts across its agencies, the federal government lacks a centralized office charged with long-range, comprehensive, streamlined planning to address critical science and technology developments. The status quo risks misalignment between agencies and redundant strategic work. At the outset of the next presidential term, the President should create a new, centralized office championing strategic foresight. This office would lead strategic processes that use data-driven models to analyze plausible futures, continually evaluating macro sources of change, identifying emerging trends, and mapping the trajectory and velocity of change. Focused on providing authoritative, unbiased insights to the executive branch, it should facilitate forward-leaning research, knowledge dissemination, and capability building through ongoing strategic conversations, experiential learning, and rigorous quantitative and qualitative processes that result in concrete actions.

Working Papers

Vic Baines

Abstract:

Predicting the future is a fool's errand. Or is it? Technology has proved an agent of unprecedented disruption in recent years, but the instinct of some humans to do harm to others remains a constant. Cyberattacks continue to take the global community by surprise, and government actors still have a tendency to describe cybercrime as a new phenomenon. Knowing what we know about criminal modi operandi and motivations, can we speculate on the future of cybercrime in a way that enables governments, businesses, and citizens to anticipate and prepare for the threats to come? Vic will present her ongoing work reviewing a past cybersecurity futures exercise, as well as a new project that aims to see further.


Downloadable Flyer: The Cyber Policy Center Lunch Seminar Series
Seminars

Daphne Keller
Abstract:

Facebook recently announced its own version of the Supreme Court: a 40-member board that will make final decisions about user posts that Facebook has taken down. The announcement came after extended deliberations that have been described as Facebook’s “constitutional convention.” Sweeping terms such as “Supreme Court” and “constitution” are not commonly used to describe the operation of private companies, but here they seem appropriate given the platforms’ importance for the many people who use them in place of newspapers, TV stations, the postal service, and even money. Yet private platforms aren’t really the public square, and internet companies aren’t governments. That’s exactly why they are free to do what so many people seem to want: set aside the First Amendment’s speech rules in favor of new, more restrictive ones.

Mimicking a few government systems will not make internet platforms adequate substitutes for real governments, subject to real laws and real rights-based constraints on their power. Compared with democratic governments, platforms are far more capable of restricting speech. And they are far less accountable than elected officials for their choices. In this talk, I will delve into the differences we should be considering before urging platforms to take on greater roles as arbiters of speech and information.

Daphne Keller Bio

 

Lunch Seminar Series Flyer
  • E207, Encina Hall
  • 616 Serra Mall, Stanford, CA 94305

Daphne Keller is the Director of Platform Regulation at the Stanford Program in Law, Science, & Technology. Her academic, policy, and popular press writing focuses on platform regulation and Internet users’ rights in the U.S., EU, and around the world. Her recent work has focused on platform transparency, data collection for artificial intelligence, interoperability models, and “must-carry” obligations. She has testified before legislatures, courts, and regulatory bodies around the world on topics ranging from the practical realities of content moderation to copyright and data protection. She was previously Associate General Counsel for Google, where she had responsibility for the company’s web search products. She is a graduate of Yale Law School, Brown University, and Head Start.


FILINGS

  • U.S. Supreme Court amicus brief on behalf of Francis Fukuyama, NetChoice v. Moody (2024)
  • U.S. Supreme Court amicus brief with ACLU, Gonzalez v. Google (2023)
  • Comment to European Commission on data access under EU Digital Services Act
  • U.S. Senate testimony on platform transparency

 


Director of Platform Regulation, Stanford Program in Law, Science & Technology (LST)
Social Science Research Scholar
Director of Intermediary Liability, Center for Internet and Society

In February, the White House attributed “the most destructive and costly cyberattack in history,” a summer 2017 attack affecting critical infrastructure and other victims around the world, to Russian intelligence services. The malicious code used in the attack, known as NotPetya, permanently encrypts the data on the computers it infects, essentially destroying them. Ground zero for the malware was Ukraine, but it self-propagated and quickly spread to Asia, Europe, and the United States, costing its victims billions of dollars in damage. Russia’s hand in the NotPetya attack ought to send a chill down the spine of anybody who uses products from the Moscow-based antivirus company Kaspersky Lab. Russian law and practice grant Russian intelligence agencies virtually unfettered authority to compel any internet-facing business in Russia to support their operations.

 

Journal Articles

Excerpt from: "Cyber Security Derailed? Recommendations for Smarter Investments in Infrastructure." War on the Rocks. November 2018. Online.

A state-owned Chinese company receives a contract to build and maintain the next generation of railcars that service Metro stations at the Pentagon, near the White House and Capitol Hill, and throughout the Washington, D.C., metro area. What could possibly go wrong? 

Possibly nothing, but maybe something. Commuter trains have come a long way from the unconnected transit assets that moved through and between cities independently. Modern rail cars are nodes in complex transit communications networks, extensions of a transit authority’s information and operational technology infrastructures, and even WiFi hotspots. Procurement announcements for the next generation of cars, like the one recently issued by D.C.’s Washington Metropolitan Area Transit Authority (WMATA), illustrate the complex, connected technologies that underpin promised improvements in automation, safety, and commuter experience.

 

Commentary

Marietje Schaake, an outgoing Member of the European Parliament who initiated the net neutrality law now in effect throughout Europe, will be the Cyber Policy Center’s international policy director, and an international policy fellow at the university’s Institute for Human-Centered Artificial Intelligence. 

 


The Freeman Spogli Institute for International Studies (FSI) and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) are pleased to announce that Marietje Schaake has been named to international policy roles in each of their organizations.

At FSI, Schaake will serve as the first international policy director of the Cyber Policy Center. With a focus on cybersecurity, disinformation, digital democracy and election security, the Cyber Policy Center’s research, teaching and policy engagement aims to bring new insights and solutions to national governments, international institutions and industry.

Schaake will also be an international policy fellow at Stanford HAI, which seeks to advance artificial intelligence (AI) research, education, policy and practice to improve the human condition. The university-wide institute is committed to working with industry, governments and civil society organizations that share the goal of a better future for humanity through AI. 

Connecting Cyber Research with the World

As international policy director at the Cyber Policy Center, Schaake will conduct policy-relevant research focused on cyber policy recommendations for industry and government. In addition to her own research, she will represent the center to governments, NGOs and the technology industry. 

“Over the course of her career in the European Parliament, Marietje Schaake has distinguished herself as someone who not only has a deep understanding of cyber policy issues, but knows how to enact the appropriate policy-related measures in the real world,” said Nathaniel Persily, the center’s faculty co-director, and the James B. McClatchy Professor of Law at Stanford Law School. “She is a fantastic addition to our growing team of researchers and practitioners from across disciplines, and I can’t wait to welcome her to campus in the fall.” 

In addition to research and policy outreach, Schaake will teach courses on cyber policy, particularly from an international perspective, and bring leaders to Stanford from around the world to discuss cyber policy.  

“Marietje’s extensive experience in politics, with a special focus on cyber policy, will bring a critical perspective to our classrooms,” said Michael McFaul, director of FSI. “Her stellar reputation and track record as a policymaker will be key in building connections between Stanford’s community of students, scholars and relevant policy influencers around the world.” 

At the Forefront of AI Policy and Scholarship

As the inaugural international policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, Schaake will work with faculty to translate research into practical and implementable policy recommendations, and support the institute’s work to partner with AI leaders across sectors.

“AI is a technology that will affect every dimension of human life, and to ensure that its development and deployment is broadly beneficial for humans and society, we need to incorporate global perspectives into our work,” said Rob Reich, HAI associate director and professor of political science. “Marietje played a leading role in establishing the field of cyber policy in Europe, and will contribute enormously to the creation of a community of research, policy and practice focused on addressing the real-world impact of AI. And through her writing and teaching, she can help to shape the future generation of leaders across academia, government, industry and civil society.”

A Career of Policy Impact

Prior to joining Stanford, Marietje Schaake led an active career in politics and civic service. She was a representative of the Dutch Democratic Party and the Alliance of Liberals and Democrats for Europe (ALDE) in European Parliament, where she was first elected in 2009. 

In the European Parliament, Schaake focused on trade, foreign policy, and technology. As a member of the Global Commission on the Stability of Cyberspace and founder of the European Parliament Intergroup on the European Digital Agenda, she developed solutions to strengthen the rule of law online, including initiating the net neutrality law now in effect throughout Europe.

“It is an honor to be joining the talented and dedicated teams at FSI and HAI on the Stanford campus,” said Schaake. “I look forward to researching and developing sensible cyber policy recommendations and to continuing to bridge the gaps between governments and the technology sector around the world.”

###

About the Cyber Policy Center

The digital age has exposed countries to new security threats and sovereignty challenges that policymakers have only begun to address. In addition, social media and network technologies increasingly strain the balance between protecting freedom of expression and preventing foreign actors from influencing elections. To date, technological advancement in this domain has outpaced government policies, doctrines or regulations. The Cyber Policy Center at the Freeman Spogli Institute for International Studies at Stanford University aims to address this need through research, policy advocacy and teaching. Program areas address topics including cybersecurity, election security, misinformation, digital democracy and human rights, and emerging technologies. Through research, policy engagement and teaching, the Cyber Policy Center brings cutting-edge insights and solutions to national governments, international institutions and industry.

About the Institute for Human-Centered Artificial Intelligence

At Stanford HAI, our vision for the future is led by our commitment to studying, guiding and developing human-centered AI technologies and applications. We believe AI should be collaborative, augmentative, and enhancing to human productivity and quality of life. Our mission is to advance AI research, education, policy, and practice to improve the human condition. Stanford HAI leverages the university’s strength across all disciplines, including business, economics, education, genomics, law, literature, medicine, neuroscience, philosophy and more. These complement Stanford's tradition of leadership in AI, computer science, engineering and robotics.

Marietje Schaake can be reached by email at mschaake@stanford.edu. Her website is www.marietjeschaake.eu.

Media Inquiries: Mike Sellitto, Deputy Director, Stanford Institute for Human-Centered Artificial Intelligence, shai-press@stanford.edu

 

Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN’s High-Level Advisory Body on AI. Between 2009 and 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of The Tech Coup.


 

Non-Resident Fellow, Cyber Policy Center
Fellow, Institute for Human-Centered Artificial Intelligence

The Freeman Spogli Institute for International Studies (FSI) is pleased to announce that Kelly Born has been named the first executive director of the Cyber Policy Center. With a focus on cybersecurity, disinformation, digital democracy and election security, the Cyber Policy Center’s research, teaching and policy engagement aim to bring new insights and solutions to national governments, international institutions and industry.

As executive director, Born will collaborate with the center’s program leaders to pioneer academic programs focused on cyber issues, including new lines of research, a case-based, policy-oriented curriculum, pre- and postdoctoral training and practitioner fellowships, policy workshops and executive education. Born will also serve as the key spokesperson within the university and externally to the media, policy influencers, industry, foundations and civil society organizations. 

Prior to joining Stanford, Born helped to launch and lead The Madison Initiative at the William and Flora Hewlett Foundation, one of the largest philanthropic undertakings in America working to reduce polarization and improve U.S. democracy. There, Born designed and implemented strategies focused on money in politics, electoral reform, civic engagement and digital disinformation. In this capacity, Born worked with academics, government leaders, social media companies, foundations, and nonprofits around the world to help improve online information ecosystems. 

Before joining the William and Flora Hewlett Foundation, Born worked as a strategy consultant with the Monitor Group, supporting strategic planning efforts at Fortune 100 companies, governments, and nonprofits in the U.S., Africa, Asia, Latin America and Europe. 

Born earned a master’s degree in international policy from Stanford University. The graduate program is offered through the Freeman Spogli Institute for International Studies.

“We are thrilled that Kelly is returning to Stanford to play a leadership role at the Cyber Policy Center,” said Nathaniel Persily, the center’s faculty co-director, and the James B. McClatchy Professor of Law at Stanford Law School. “Her deep knowledge of our core research areas and strong relationships with leaders in academia, government and technology circles position the center well to achieve its strategic aims.”

The Cyber Policy Center was established in June 2019 and includes four programs: the Program on Democracy and the Internet; the Program on Geopolitics, Technology, and Governance; the Internet Observatory; and the Global Digital Policy Incubator. Together, they focus on addressing the threats cyber technologies pose to security and governance worldwide. 

The center’s launch event, “Securing Our Cyber Future: Innovative Approaches to Digital Threats,” featured the center’s first white paper, “Securing American Elections: Prescriptions for Enhancing the Integrity and Independence of the 2020 U.S. Presidential Elections and Beyond,” which was co-authored by scholars affiliated with the Cyber Policy Center. The report details 45 recommendations for protecting the 2020 U.S. presidential election from domestic and foreign interference.

“I am honored and excited to have the opportunity to work with the distinguished faculty and staff at the new Cyber Policy Center, as well as the broader Stanford community of faculty and students,” said Born. “Questions of how best to maximize the benefits and minimize the harms presented by our increasingly networked, online world are amongst the most important and challenging questions global societies are grappling with today. Stanford’s Cyber Policy Center is ideally suited to pursue the research, teaching and policy engagement necessary to help answer these questions.”


 

As the internet has increasingly been used to weaponize information, governments and technology companies have begun to grapple with new issues surrounding free expression and privacy.

Technology companies are being called upon to reshape their privacy and hate speech policies, and politicians are tackling the possibility of tech industry regulation.

Achieving both of those things, according to Eileen Donahoe, executive director of the Global Digital Policy Incubator (GDPI), is easier said than done.

“They all know that they need help,” Donahoe told Freeman Spogli Institute Director Michael McFaul on an episode of the World Class podcast. “Private-sector entities are looking for help from civil society and academics. And governments need help if for no other reason than they don’t always understand what’s going on in the platforms."




Free Speech Dilemma
Facebook and Google both have their own definitions of free speech, their own community values and their own terms of service, which they dictate to their billions of users. But their parameters of free expression are not always aligned with those of the U.S. government, Donahoe said.

“There’s an interplay between the rules of the platforms and the rules of the governments in which they operate, and that’s causing a lot of confusion,” she said. “We’re trying to help develop an appropriate metaphor for what these platforms are — some see themselves as a utility, some see them as editors and media. Whatever metaphor you pick, the rules and responsibilities that flow from it will be different. And we don’t have a metaphor yet.”

Over the last few years, tech companies have begun asking outsiders for help in developing norms for their platforms. Facebook CEO Mark Zuckerberg announced in 2018 that he was developing an “External Oversight Board” to help the company evaluate its community guidelines and for assistance with some of the content-based decisions on its platform.

Some companies are going as far as to call on the government for regulation, she said.

“They recognize that they’re not well suited to develop all of these norms for [their] platforms, which have such gigantic effects on society,” Donahoe said.

To Regulate or Not to Regulate
Several heads of technology companies testified before the U.S. Senate this summer, including Zuckerberg, who answered questions about the company’s new cryptocurrency, and Karan Bhatia, Google’s vice president for government affairs and public policy, who testified on the question of whether Google’s search engine censors conservative media.

“Techlash” — the growing animosity toward large technology companies — has been on the rise, Donahoe said, and the government isn’t yet sure of its next steps in handling these issues with the technology companies.

“So many congressional representatives and senators are a bit reticent to jump in,” she said. “They don’t want to undermine free expression, and they don’t want to destroy the American internet industry.”

Europe has already started tackling this problem with the passage of the General Data Protection Regulation (GDPR), which standardizes data protection laws across all countries in the European Union.

Donahoe said that while she thinks the GDPR is a good move, there have been other laws passed in Europe, such as Germany’s Network Enforcement Act — which puts the liability on social media companies to censor the content on their respective platforms — that undermine free expression and democratic values.

“It shifts what we would normally consider democratic responsibility for assessing criminality to the private sector, and I find that problematic,” Donahoe said. “It’s a dangerous concept — a government is asking platforms to restrict content and be liable on a tort basis for content that is perceived to be harmful…it’s a very slippery slope.”

Related: Watch Eileen Donahoe’s interactive workshop on deep fakes at the June 2019 Copenhagen Democracy Summit

Eileen Donahoe served as the first U.S. ambassador to the United Nations Human Rights Council in Geneva. Follow her at @EileenDonahoe
 

 

Eileen Donahoe, executive director of the Global Digital Policy Incubator, presents at the 2019 Copenhagen Democracy Summit. Photo: Alliance of Democracies

Did the Russian-affiliated groups that interfered with the 2016 U.S. presidential election want to be caught?

“There’s a reason why they paid for Facebook ads in rubles,” Nathaniel Persily, who is a senior fellow at FSI and co-director of the Cyber Policy Center, told FSI Director Michael McFaul on the World Class podcast. “They wanted to be open and notorious.”

Since the election, Americans have become more suspicious of fake news, but they have also become suspicious of real news and journalists in general. Another problem with the Russians’ success in influencing the 2016 election, said Persily, is that Americans will automatically assume that the Russians will do the same thing during the 2020 race.

“Everyone is going to be looking for nefarious influences and shouting them from the rooftops, and that actually serves the [bad actors’] purposes just as much,” Persily said. “Many of the attempts in 2016 were about fostering division and doubt, and I think there’s a lot of appetite for doubt right now in America.”

Sign up for the FSI Newsletter to get stories like this delivered straight to your inbox.

Since 2016, Facebook, Twitter and Google have made some important changes to the way they handle advertising, including adding a requirement that all candidate ads and other ads of “national legislative importance” be identified as advertisements on users’ feeds.

But there are no standardized rules or regulations that dictate how tech companies should handle advertisements or posts that contain disinformation, Persily said, and because of this, it is up to those respective companies to make those decisions themselves — and they aren’t always in agreement. For example, when a video of Nancy Pelosi that was slowed down to make her seem drunk was posted in late May on YouTube and Facebook, YouTube took the video down, but Facebook decided to leave it up.

“The standards that are going to be developed in test cases like these — under conditions which are not as politically incendiary as an election — are going to be the ones that will be rolled out and applied in elections in the U.S. and around the world,” Persily said.

When it comes to election security, the 2020 presidential race will be the next big test for the U.S. government and private-sector companies. But other countries should also be on the lookout for activity from foreign agents and actors in their elections.

“The 2016 election was not just an event, it was a playbook that was written by the Russians,” warned Persily. “That playbook is usable for future elections in the United States as well as around the world, whether it’s between India and Pakistan or China and Taiwan.”

 
