Security

FSI scholars produce research aimed at creating a safer world and examining the consequences of security policies on institutions and society. They look at longstanding issues, including nuclear nonproliferation and conflicts such as that between North and South Korea. But their research also examines new and emerging areas that transcend traditional borders – the drug war in Mexico and expanding terrorism networks. FSI researchers look at the changing methods of warfare with a focus on biosecurity and nuclear risk. They tackle cybersecurity with an eye toward privacy concerns and explore the implications of new actors like hackers.

Along with the changing face of conflict, terrorism and crime, FSI researchers study food security. They tackle the global problems of hunger, poverty and environmental degradation by generating knowledge and policy-relevant solutions. 

News Type
News
Date
Paragraphs

The Freeman Spogli Institute for International Studies (FSI) is pleased to announce that Kelly Born has been named the first executive director of the Cyber Policy Center. With a focus on cybersecurity, disinformation, digital democracy and election security, the Cyber Policy Center’s research, teaching and policy engagement aim to bring new insights and solutions to national governments, international institutions and industry.

As executive director, Born will collaborate with the center’s program leaders to pioneer academic programs focused on cyber issues, including new lines of research, a case-based, policy-oriented curriculum, pre- and postdoctoral training and practitioner fellowships, policy workshops and executive education. Born will also serve as the key spokesperson within the university and externally to the media, policy influencers, industry, foundations and civil society organizations. 

Prior to joining Stanford, Born helped to launch and lead The Madison Initiative at the William and Flora Hewlett Foundation, one of the largest philanthropic undertakings in America working to reduce polarization and improve U.S. democracy. There, Born designed and implemented strategies focused on money in politics, electoral reform, civic engagement and digital disinformation. In this capacity, Born worked with academics, government leaders, social media companies, foundations, and nonprofits around the world to help improve online information ecosystems. 

Before joining the William and Flora Hewlett Foundation, Born worked as a strategy consultant with the Monitor Group, supporting strategic planning efforts at Fortune 100 companies, governments, and nonprofits in the U.S., Africa, Asia, Latin America and Europe. 

Born earned a master’s degree in international policy from Stanford University. The graduate program is offered through the Freeman Spogli Institute for International Studies.

“We are thrilled that Kelly is returning to Stanford to play a leadership role at the Cyber Policy Center,” said Nathaniel Persily, the center’s faculty co-director and the James B. McClatchy Professor of Law at Stanford Law School. “Her deep knowledge of our core research areas and strong relationships with leaders in academia, government and technology circles position the center well to achieve its strategic aims.”

The Cyber Policy Center was established in June 2019 and includes four programs: the Program on Democracy and the Internet; the Program on Geopolitics, Technology, and Governance; the Internet Observatory; and the Global Digital Policy Incubator. Together, they focus on addressing the threats cyber technologies pose to security and governance worldwide. 

The center’s launch event, “Securing Our Cyber Future: Innovative Approaches to Digital Threats,” featured the center’s first white paper, “Securing American Elections: Prescriptions for Enhancing the Integrity and Independence of the 2020 U.S. Presidential Elections and Beyond,” which was co-authored by scholars affiliated with the Cyber Policy Center. The report details 45 recommendations for protecting the 2020 U.S. presidential election from domestic and foreign interference.

“I am honored and excited to have the opportunity to work with the distinguished faculty and staff at the new Cyber Policy Center, as well as the broader Stanford community of faculty and students,” said Born. “Questions of how best to maximize the benefits and minimize the harms presented by our increasingly networked, online world are amongst the most important and challenging questions global societies are grappling with today. Stanford’s Cyber Policy Center is ideally suited to pursue the research, teaching and policy engagement necessary to help answer these questions.”

About the Cyber Policy Center

The digital age has exposed countries to new security threats and sovereignty challenges that policymakers have only begun to address. In addition, social media and network technologies increasingly strain the balance between protecting the First Amendment and preventing foreign actors from influencing elections. To date, technological advancement in this domain has outpaced government policies, doctrines, or regulations. The Cyber Policy Center at the Freeman Spogli Institute for International Studies at Stanford University aims to address this need through research, policy advocacy and teaching. Program areas address topics including cybersecurity, election security, misinformation, digital democracy and human rights, artificial intelligence, and emerging technologies. Through research, policy engagement and teaching, the Cyber Policy Center brings cutting-edge insights and solutions to national governments, international institutions, and industry.

 

As the internet has increasingly been used to weaponize information, governments and technology companies have begun to grapple with new issues surrounding free expression and privacy.

Technology companies are being called upon to reshape their privacy and hate speech policies, and politicians are weighing the possibility of tech industry regulation.

Achieving both of those things, according to Eileen Donahoe, executive director of the Global Digital Policy Incubator (GDPI), is easier said than done.

“They all know that they need help,” Donahoe told Freeman Spogli Institute Director Michael McFaul on an episode of the World Class podcast. “Private-sector entities are looking for help from civil society and academics. And governments need help if for no other reason than they don’t always understand what’s going on in the platforms.”




Free Speech Dilemma
Facebook and Google both have their own definitions of free speech, their own community values and their own terms of service, which they dictate to their billions of users. But their parameters of free expression are not always aligned with those of the U.S. government, Donahoe said.

“There’s an interplay between the rules of the platforms and the rules of the governments in which they operate, and that’s causing a lot of confusion,” she said. “We’re trying to help develop an appropriate metaphor for what these platforms are — some see themselves as a utility, some see them as editors and media. Whatever metaphor you pick, the rules and responsibilities that flow from it will be different. And we don’t have a metaphor yet.”

Over the last few years, tech companies have begun asking outsiders for help in developing norms for their platforms. Facebook CEO Mark Zuckerberg announced in 2018 that he was developing an “External Oversight Board” to help the company evaluate its community guidelines and for assistance with some of the content-based decisions on its platform.

Some companies are going as far as to call on the government for regulation, she said.

“They recognize that they’re not well suited to develop all of these norms for [their] platforms, which have such gigantic effects on society,” Donahoe said.

To Regulate or Not to Regulate
Several heads of technology companies testified before the U.S. Senate this summer, including Zuckerberg, who answered questions about the company’s new cryptocurrency, and Karan Bhatia, Google’s vice president for government affairs and public policy, who testified on whether Google’s search engine censors conservative media.

“Techlash” — the growing animosity toward large technology companies — has been on the rise, Donahoe said, and the government isn’t yet sure of its next steps in handling these issues with the technology companies.

“So many congressional representatives and senators are a bit reticent to jump in,” she said. “They don’t want to undermine free expression, and they don’t want to destroy the American internet industry.”

Europe has already started tackling this problem with the passage of the General Data Protection Regulation (GDPR), which standardizes data protection laws across all countries in the European Union.

Donahoe said that while she thinks the GDPR is a good move, there have been other laws passed in Europe, such as Germany’s Network Enforcement Act — which puts the liability on social media companies to censor the content on their respective platforms — that undermine free expression and democratic values.

“It shifts what we would normally consider democratic responsibility for assessing criminality to the private sector, and I find that problematic,” Donahoe said. “It’s a dangerous concept — a government is asking platforms to restrict content and be liable in a tort basis for content that is perceived to be harmful…it’s a very slippery slope.”

Related: Watch Eileen Donahoe’s interactive workshop on deep fakes at the June 2019 Copenhagen Democracy Summit

Eileen Donahoe served as the first U.S. ambassador to the United Nations Human Rights Council in Geneva. Follow her at @EileenDonahoe
 

 

Eileen Donahoe, executive director of the Global Digital Policy Incubator, presents at the 2019 Copenhagen Democracy Summit. Photo: Alliance of Democracies

Did the Russian-affiliated groups that interfered with the 2016 U.S. presidential election want to be caught?

“There’s a reason why they paid for Facebook ads in rubles,” Nathaniel Persily, who is a senior fellow at FSI and co-director of the Cyber Policy Center, told FSI Director Michael McFaul on the World Class podcast. “They wanted to be open and notorious.”

Since the election, Americans have become more suspicious of fake news, but they have also become suspicious of real news and journalists in general. Another problem with the Russians’ success in influencing the 2016 election, said Persily, is that Americans will automatically assume that the Russians will do the same thing during the 2020 race.

“Everyone is going to be looking for nefarious influences and shouting them from the rooftops, and that actually serves the [bad actors’] purposes just as much,” Persily said. “Many of the attempts in 2016 were about fostering division and doubt, and I think there’s a lot of appetite for doubt right now in America.”


Since 2016, Facebook, Twitter and Google have made some important changes to the way they handle advertising, including adding a requirement that all candidate ads and other ads of “national legislative importance” be identified as advertisements on users’ feeds.

But there are no standardized rules or regulations that dictate how tech companies should handle advertisements or posts that contain disinformation, Persily said, and because of this, it is up to those respective companies to make those decisions themselves — and they aren’t always in agreement. For example, when a video of Nancy Pelosi that was slowed down to make her seem drunk was posted in late May on YouTube and Facebook, YouTube took the video down, but Facebook decided to leave it up.

“The standards that are going to be developed in test cases like these — under conditions which are not as politically incendiary as an election — are going to be the ones that will be rolled out and applied in elections in the U.S. and around the world,” Persily said.

When it comes to election security, the 2020 presidential race will be the next big test for the U.S. government and private-sector companies. But other countries should also be on the lookout for activity from foreign agents and actors in their elections.

“The 2016 election was not just an event, it was a playbook that was written by the Russians,” warned Persily. “That playbook is usable for future elections in the United States as well as around the world, whether it’s between India and Pakistan or China and Taiwan.”

 


Renée DiResta is the former Research Manager at the Stanford Internet Observatory. She investigates the spread of malign narratives across social networks, and assists policymakers in understanding and responding to the problem. She has advised Congress, the State Department, and other academic, civic, and business organizations, and has studied disinformation and computational propaganda in the context of pseudoscience conspiracies, terrorism, and state-sponsored information warfare.

You can see a full list of Renée's writing and speeches on her website, www.reneediresta.com, or follow her at @noupside.

 

Former Research Manager, Stanford Internet Observatory

Cyber Initiative grantees and researchers in the news, February 2019

Here is a selection of Cyber Initiative grantee and researcher publications and citations for February 2019:

1/30/19: Larry Diamond “Chinese Influence, American Interests” in The Diplomat.

1/30/19: Michelle Mello “Stanford’s Michelle Mello on Latest Measles Outbreak” in SLS Blogs.

1/31/19: Matthew Gentzkow “How Quitting Facebook Could Change Your Life” in Fortune.

1/30/19: Matthew Gentzkow “This is Your Brain Off Facebook” in Health.

2/3/19: Herb Lin “Atomic Scientists: Humanity flirting with annihilation” in Tribune.

2/4/19: Matthew Gentzkow “Quitting Facebook makes people happier, study finds” in Irish Examiner.

2/6/19: Herb Lin “Add cybersecurity to Doomsday Clock concerns, says Bulletin of Atomic Scientists” in CSO.

2/6/19: Herb Lin “Add cybersecurity to Doomsday Clock concerns, says Bulletin of Atomic Scientists” in CIO.

2/8/19: Elaine Treharne “Statement on the Hoover Institution” in The Stanford Daily.

2/13/19: Michelle Mello “Stanford’s Michael Wald on Vaccinations, Children’s Rights, and the Law” in The Stanford Report.

2/15/19: Fei-Fei Li and Elaine Treharne “Human-centered Artificial Intelligence Initiative talks AI, humanities and the arts” in The Stanford Daily.

2/19/19: Fei-Fei Li “5 Women advancing AI industry research” in Tech Talks.

2/19/19: Fei-Fei Li “10 AI influencers you should be following on Twitter” in Siliconrepublic.com.

2/22/19: Larry Diamond “Utah Against Health Insurance” in New York Times.

2/23/19: Sharad Goel “Algorithms Can Decide Pre-Trial Jail” in Urban Milwaukee.

2/25/19: Dan Boneh “Zether developers from Stanford aim to add new layer of privacy to Ethereum” in Dapp Life.

2/26/19: Susan Athey “Ripple Lead on Questions – Student Seeks Clarification for Promoting XRP Over Bitcoin in Stanford University” in CoinGape.

2/26/19: Larry Diamond “George Pyle: Utah’s Medicaid reversal makes us a fool coast-to-coast” in Salt Lake Tribune.

2/27/19: Arnold Milstein “AI will not solve health care challenges now, but there are innovative alternatives, researcher writes” in Scope.

2/28/19: Dan Boneh “New Privacy Protocol Zether Can Conceal Ethereum Transactions” in Blockonomi.

2/28/19: Jure Leskovec “Species evolve ways to back up life's machinery” in Phys.org.

2/28/19: Matthew Gentzkow “What happens when you get off Facebook for four weeks? Stanford researchers found out” in Recode.


Midterm elections pose an opportunity for hackers interested in disrupting the democratic process

Voter registration systems provide an additional target for hackers intending to disrupt the U.S. midterm elections; if voting machines themselves are too dispersed or too obvious a target, removing voters from the rolls could have a similar effect. In Esquire, Jack Holmes explains that election security experts consider this one of many nightmare scenarios facing the American voting public — and thus American democracy itself — on the eve of the 2018 midterm elections. (Allison Berke, executive director of the Stanford Cyber Initiative, is quoted.)


Conversational software programs might provide patients a less risky environment for discussing mental health, but they come with risks to privacy and accuracy. Stanford scholars discuss the pros and cons of this trend.

 

Interacting with a machine may seem like a strange and impersonal way to seek mental health care, but advances in technology and artificial intelligence are making that type of engagement more and more a reality. Online sites such as 7 Cups of Tea and Crisis Text Line are providing counseling services via web and text, but this style of treatment has not been widely utilized by hospitals and mental health facilities.

Stanford scholars Adam Miner, Arnold Milstein and Jeff Hancock examined the benefits and risks associated with this trend in a Sept. 21 article in the Journal of the American Medical Association. They discuss how technological advances now offer the capability for patients to have personal health discussions with devices like smartphones and digital assistants.

Stanford News Service interviewed Miner, Milstein and Hancock about this trend.

Read more: https://news.stanford.edu/2017/09/25/scholars-discuss-mental-health-technology/

Conversational software programs are making it possible for people to seek mental health care online and via text, but the risks and benefits need further study, Stanford experts say. (Image credit: roshinio / Getty Images)

Are you interested in cybersecurity? Have you wanted to learn offensive cyber techniques but don't know where to get started? The Applied Cybersecurity team is hosting an introductory workshop to get people started with practicing exploitation and offensive cyber techniques in an ethical setting. In particular, we will focus on gaining familiarity with techniques used for competing in Capture the Flag (CTF) competitions.

We'll be hosting the first workshop this Friday, in preparation for the HITCON CTF next week. Bring a laptop! This workshop assumes no prerequisite experience with hacking or cybersecurity, so please attend regardless of how unfamiliar you are with the topic. For this workshop, we will focus on web vulnerabilities, binary reversing, and some basic cryptography challenges. Note that experience equivalent to CS107 will be useful. Food will be provided!

RSVP here: https://goo.gl/forms/M5yzuQasIZpL4Ovy1

Shriram 366
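As a taste of what an introductory CTF cryptography challenge can look like, here is a minimal sketch (an illustrative example, not taken from the workshop materials): recovering a flag that was XORed with a single unknown key byte. The CTF{...} flag format and the key 0x42 are hypothetical.

```python
def xor_with_byte(data: bytes, key: int) -> bytes:
    """XOR every byte of `data` with the single-byte `key`."""
    return bytes(b ^ key for b in data)


def brute_force_single_byte_xor(ciphertext: bytes, marker: bytes = b"CTF{"):
    """Try all 256 possible keys; return the plaintext containing the flag marker."""
    for key in range(256):
        plaintext = xor_with_byte(ciphertext, key)
        if marker in plaintext:
            return plaintext
    return None  # no candidate plaintext contained the marker


if __name__ == "__main__":
    # Simulate a challenge: a hypothetical flag encrypted with the key 0x42.
    ciphertext = xor_with_byte(b"CTF{xor_is_not_encryption}", 0x42)
    print(brute_force_single_byte_xor(ciphertext))
```

Because the key space is only 256 values, exhaustive search is instant; the known flag prefix serves as the oracle that identifies the correct decryption.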


The Stanford Applied Cyber Team took 1st place in the Collegiate Penetration Testing Competition (CPTC) Western Regionals.

After eight hours of intense penetration testing on Saturday, October 7, at Uber HQ in San Francisco, the Stanford team returned to campus and authored a 52-page findings and remediation report, finishing at 3 a.m. and then returning to the CPTC competition venue to deliver their recommendations by 8 a.m. Sunday.

Demonstrating moxie and professionalism under fire, the team of Paul Crews, Albert Liang, Kate Stowell, Travis Lanham, Wilson Nguyen and Colleen Dai, with coach Alex Keller, has qualified for the CPTC Nationals, November 3-5 in Rochester, NY.

 


Drell Lecture Recording: NA

Drell Lecture Transcript: NA

 

Speaker's Biography: Vinton G. Cerf has served as vice president and chief Internet evangelist for Google since October 2005. He is also an active public face for Google in the Internet world. Cerf was appointed by President Obama to serve on the National Science Board beginning in February 2013.    

Widely known as one of the "Fathers of the Internet," Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. In December 1997, President Clinton presented the U.S. National Medal of Technology to Cerf and his colleague, Robert E. Kahn, for founding and developing the Internet. Kahn and Cerf received the ACM A.M. Turing Award in 2004 for their work on the Internet protocols; the Turing Award is sometimes called the “Nobel Prize of Computer Science.” In November 2005, President George W. Bush awarded Cerf and Kahn the Presidential Medal of Freedom for their work.

Oak Lounge
Tresidder Memorial Union, 2nd Floor
Stanford

Speaker: Vinton G. Cerf, Vice President and Chief Internet Evangelist, Google