Cybersecurity

Alex Stamos is a cybersecurity expert, business leader and entrepreneur working to improve the security and safety of the Internet. Stamos was the founding director of the Stanford Internet Observatory at the Cyber Policy Center, a part of the Freeman Spogli Institute for International Studies. He is currently a lecturer, teaching in both the Masters in International Policy Program and in Computer Science.

Prior to joining Stanford, Alex served as the Chief Security Officer of Facebook. In this role, Stamos led a team of engineers, researchers, investigators and analysts charged with understanding and mitigating information security risks to the company and safety risks to the 2.5 billion people on Facebook, Instagram and WhatsApp. During his time at Facebook, he led the company’s investigation into manipulation of the 2016 US election and helped pioneer several successful protections against these new classes of abuse. As a senior executive, Alex represented Facebook and Silicon Valley to regulators, lawmakers and civil society on six continents, and has served as a bridge between the interests of the Internet policy community and the complicated reality of platforms operating at billion-user scale. In April 2017, he co-authored “Information Operations and Facebook”, a highly cited examination of the influence campaign against the US election, which still stands as the most thorough description of the issue by a major technology company.

Before joining Facebook, Alex was the Chief Information Security Officer at Yahoo, rebuilding a storied security team while dealing with multiple assaults by nation-state actors. While at Yahoo, he led the company’s response to the Snowden disclosures by implementing massive cryptographic improvements in his first months. He also represented the company in an open hearing of the US Senate’s Permanent Subcommittee on Investigations.

In 2004, Alex co-founded iSEC Partners, an elite security consultancy known for groundbreaking work in secure software development, embedded and mobile security. As a trusted partner to the world’s largest technology firms, Alex coordinated the response to the “Aurora” attacks by the People’s Liberation Army at multiple Silicon Valley firms and led groundbreaking work securing the world’s largest desktop and mobile platforms. During this time, he also served as an expert witness in several notable civil and criminal cases, such as the Google Street View incident and pro bono work for the defendants in Sony v. George Hotz and US v. Aaron Swartz. After the 2010 acquisition of iSEC Partners by NCC Group, Alex formed an experimental R&D division at the combined company, producing five patents.

A noted speaker and writer, he has appeared at the Munich Security Conference, NATO CyCon, Web Summit, DEF CON, CanSecWest and numerous other events. His 2017 keynote at Black Hat was noted for its call for a security industry more representative of the diverse people it serves and the actual risks they face. Throughout his career, Alex has worked toward making security a more representative field and has highlighted the work of diverse technologists as an organizer of the Trustworthy Technology Conference and OURSA.

Alex has been involved in securing the US election system as a contributor to Harvard’s Defending Digital Democracy Project and in the academic community as an advisor to Stanford’s Cybersecurity Policy Program and UC Berkeley’s Center for Long-Term Cybersecurity. He is a member of the Aspen Institute’s Cyber Security Task Force, the Bay Area CSO Council and the Council on Foreign Relations. Alex also serves on the advisory board of the NATO Cooperative Cyber Defence Centre of Excellence in Tallinn, Estonia.

Former Director, Stanford Internet Observatory
Lecturer, Masters in International Policy
Lecturer, Computer Science

Conversational software programs might provide patients a less risky environment for discussing mental health, but they also carry risks to privacy and accuracy. Stanford scholars discuss the pros and cons of this trend.

 

Interacting with a machine may seem like a strange and impersonal way to seek mental health care, but advances in technology and artificial intelligence are making that type of engagement more and more a reality. Online sites such as 7 Cups of Tea and Crisis Text Line are providing counseling services via web and text, but this style of treatment has not been widely utilized by hospitals and mental health facilities.

Stanford scholars Adam Miner, Arnold Milstein and Jeff Hancock examined the benefits and risks associated with this trend in a Sept. 21 article in the Journal of the American Medical Association. They discuss how technological advances now offer the capability for patients to have personal health discussions with devices like smartphones and digital assistants.

Stanford News Service interviewed Miner, Milstein and Hancock about this trend.

Read more: https://news.stanford.edu/2017/09/25/scholars-discuss-mental-health-technology/

Conversational software programs are making it possible for people to seek mental health care online and via text, but the risks and benefits need further study, Stanford experts say. (Image credit: roshinio / Getty Images)

Facebook and Congress Must Create Regulations Together

Featuring Eileen Donahoe, executive director of the Global Digital Policy Incubator, and Allison Berke, executive director of the Stanford Cyber Initiative. Both programs are housed at the Freeman Spogli Institute for International Studies (FSI). Written by Nicole Feldman.

Over the past two days, the United States Senate and House of Representatives grilled Facebook CEO Mark Zuckerberg on everything from user privacy to platform bias to Russian interference in the 2016 elections. Though prompted by Cambridge Analytica’s improper use of user data, Zuckerberg’s testimony provided a broader platform to talk about Facebook’s role in today’s increasingly digital world and regulation for the tech industry as a whole. FSI scholars Eileen Donahoe and Allison Berke give us their top take-aways from Zuckerberg’s testimony.

 

Eileen Donahoe

 

There were two big “take-aways” from Mark Zuckerberg’s testimony before Congress this week.

Digital privacy is a form of security that matters to Facebook users and to citizens in our democracy.

The good news that came out of the hearings is that the American public and our representatives in Congress are waking up to the importance of citizens’ privacy in our democracy, as well as to the consequences of the loss of privacy for freedom and security. The Cambridge Analytica–Facebook saga has succeeded in bringing to public consciousness a significant security threat to our democracy, one that until now has been relatively invisible in public debate: the failure to protect users’ digital privacy can have real-world consequences for democratic processes, national security, and citizens’ liberty. Earlier un-nuanced assertions by many in the technology community that “privacy is over” and that users don’t care how their data is shared can no longer function as a dominant operating assumption. The hard reality ahead of us is how challenging it will be to protect citizens’ privacy in a context where digital platforms, tools and services are intertwined with our daily lives. The bottom line is that digital platforms will now be required to have much more nuanced conversations with their users about the tradeoffs of using free services in exchange for monetizing personal data. This will have consequences for Facebook’s business model and all freemium digital services.

Congressional hearings are not an adequate vehicle for educating legislators about how to regulate digital platforms.

The range of complex, multilayered challenges that must be tackled to optimally govern digital platforms in a democracy cannot be addressed effectively through a brief set of public hearings. Many Senators and members of Congress displayed a lack of understanding of how Facebook works, of which strands of the debate warrant deeper inspection, and of which issues must be prioritized to protect the liberty and security of citizens on digital platforms. Representatives jumped around from one subject to the next — from political bias in restricting content on Facebook, to whether Facebook is a monopoly, to whether citizens own their data, to the efficacy of user consent to terms of service — without adequately framing any of these important subjects. In effect, the Senate and House hearings themselves were shown to be poor vehicles for deepening regulators’ knowledge or making progress toward an optimal approach to regulating Facebook or other digital platforms. Other than moving toward passage of the bipartisan Honest Ads Act sponsored by Senators Amy Klobuchar (D), Mark Warner (D), and John McCain (R), which regulates political advertising on digital platforms in the same way as on television and radio, our representatives are not yet well prepared to regulate digital services. A different mode of engagement between government representatives and technology companies must be developed if legislators want to help protect citizens in the digital realm while also allowing users to continue to enjoy the benefits of digital platforms they have come to rely upon in their daily lives.

 

Allison Berke, executive director of the Stanford Cyber Initiative at FSI. Working across disciplines, the Stanford Cyber Initiative aims to understand how technology affects security, governance, and the future of work.

Mark Zuckerberg prepared for his testimony as though expecting to face hostile opposing counsel. His notes — leaked, ironically, by a press photographer when he left them open on his table during a bathroom break — show prepared language addressing calls for his own resignation and for compensation for users whose data was improperly shared, though these topics were not raised during questioning. Despite promising to work with legislators on regulations, Zuckerberg stopped short of proposing specific measures. Though he voiced support for the Honest Ads Act, when asked if he would return to Washington to aid its passage, he offered someone on his team instead and noted that he “doesn’t come to Washington too often.” The implications, both that he doesn’t need to and that he doesn’t want to be involved in forming regulations, revealed a distant relationship between Facebook and lawmakers, one shading from incomprehension to distrust to antagonism on both sides.

Many of those watching the hearings noted the Senators’ and Representatives’ clunky and repetitive lines of questioning, their difficulty choosing the precise terminology to communicate the technological gist of their inquiries, and the inability of a five-minute oral format to properly convey — and convey strictly enough to rein in a witness looking for a question’s easiest possible interpretation — the nuance in, for example, the points made by Senators Blunt and Wicker about Facebook’s cross-platform tracking between a device hosting a logged-in Facebook app and a device registered to the same user but lacking the Facebook login.

One could imagine a more collegial relationship between Facebook and Washington DC, in which representatives would have discussed their questions with Zuckerberg and his team at greater length, and perhaps behind closed doors, and could use the testimonial hearing format to place prior agreements and understandings on the record. Facebook’s apparent openness to exploring regulation should be taken as an opportunity by policymakers, both to craft regulation that may need to be complex — to cover the myriad ways in which data can be collected and mixed, and to ensure that a savvy company can’t avoid both compliance and detection — and to forge a closer relationship between the tech giant and its community representatives. That may require Zuckerberg visiting Washington a little more often, and it will also require the acquisition of more technological knowledge and expertise by legislators and their staff, which may require them to visit Silicon Valley more, too.


Views expressed here do not necessarily represent those of the Freeman Spogli Institute for International Studies or Stanford University, both of which are nonpartisan institutions.

 


The Consequences of Technological Developments for Politics and Government

Tuesday, April 24, 2018


Reception at 5:00pm. Talk from 5:30pm - 6:45pm.

RSVP required online.

The consequences of contemporary technological innovations for the lives and values of future generations are enormous. The wide range of expected – and unexpected – applications requires rethinking governance arrangements, legal regimes, economic structures, and social relations. Exploration of such topics is the subject of the 2017-18 CASBS symposium series.

The first symposium, held in November 2017, focused on “AI, Automation, and Society.” Read about and view a video of that event here.

The second symposium, held in March 2018, involved “The Effects of Technology on Human Interactions.” View the event video here.

In this final installment of the 2017-18 series, CASBS presents a conversation featuring two 2017-18 CASBS fellows – Stanford professor Nate Persily, an expert on law, democracy, and the internet; and Carrie Cihak, a senior policy expert and practitioner at one of the most innovative county governments in the U.S. They will outline the challenges that recent technology-based advances pose to democracy, public policy, and governance systems. Social media platforms increasingly are viewed as vehicles for exploiting political discourse, rather than as democratizing forces. How should our institutions respond? Though modern technological innovations more easily connect people, what are the implications for issues of “digital equity,” government capacity, and regulatory frameworks? Though the positive impacts are substantial, how do we address the numerous negative impacts of the technology sector’s concentration in certain regional economies – including the San Francisco Bay Area and the greater Seattle area? These are just a few questions that will stimulate a thought-provoking discussion between the panelists and with the audience.

 


 

As Chief of Policy for King County Executive Dow Constantine, the highest ranking elected official of King County, WA, the 13th largest county in the United States, Carrie S. Cihak is responsible for identifying the highest priority policy areas and community outcomes for leadership focus and for developing and launching innovative solutions to issues that are complex, controversial and cross-sectoral. She is an architect of some of the county’s key initiatives, such as Best Starts for Kids as well as nationally recognized work on equity and social justice. Prior to her work in Constantine’s administration, Cihak served for eight years as a senior-level analyst for the King County Council and as lead staff for the King County Board of Health. She also served as a staff economist on international trade and finance for President Clinton's Council of Economic Advisers. As a policy fellow during the 2017-18 academic year, Cihak is leading projects at CASBS and in King County that advance meaningful collaboration between academic researchers and governments. She is spearheading efforts in King County on evidence-informed decision making and is co-director of CASBS’s Impact Evaluation Design Lab, launched in March 2018. She is also using time at CASBS to explore the science and evidence-base of belonging, while working back home to help launch a cross-sector partnership called “You Belong Here,” which seeks to build civic muscle and inclusive growth in the Seattle region.


Nate Persily is the James B. McClatchy Professor of Law at Stanford Law School, with appointments in the departments of political science, communication and the Freeman Spogli Institute for International Studies. Prior to joining Stanford, Persily taught at Columbia University and the University of Pennsylvania Law School, and as a visiting professor at Harvard, NYU, Princeton, the University of Amsterdam, and the University of Melbourne. His scholarship and legal practice focus on American election law or what is sometimes called the “law of democracy,” which addresses issues such as voting rights, political parties, campaign finance, redistricting, and election administration. He has served as a special master or court-appointed expert to craft congressional or legislative districting plans for Georgia, Maryland, Connecticut, New York and, most recently, North Carolina. He also served as the Senior Research Director for the Presidential Commission on Election Administration. In addition to numerous articles (many cited by the Supreme Court) on the legal regulation of political parties, issues surrounding the census and redistricting process, voting rights, and campaign finance reform, Persily is coauthor of an election law casebook, The Law of Democracy. As a fellow at CASBS supported by the Annenberg Foundation, he is examining the impact of changing technology on political communication, campaigns, and election administration. In 2016, he received an Andrew Carnegie Fellowship to pursue this work. Persily also co-directs the Stanford Project on Democracy and the Internet.

 

*There will be valet parking at the event.

Center for Advanced Study in the Behavioral Sciences at Stanford University
75 Alta Road
Stanford, CA 94305

Nate Persily, The James B. McClatchy Professor of Law, Stanford Law School
Carrie Cihak, Chief of Policy for King County Executive Dow Constantine, King County, Washington
Please join us for an informal discussion with Marina Kaljurand and Elaine Korzak:
 
Monday February 26th
3:30-4:30pm
Encina E207 (Reuben Hills Conference Room)
 
This discussion will cover how international norms online can promote democratic values and the rule of law, drawing on experience from the Global Commission on the Stability of Cyberspace (GCSC). The GCSC is helping to promote mutual awareness and understanding among the various cyberspace communities working on issues related to international cybersecurity.
 
Marina Kaljurand served as Estonian Foreign Minister from July 2015 to October 2016. She has also served as Ambassador of Estonia to several countries, including the United States of America, the Russian Federation, the State of Israel, Mexico, and Canada. Ms. Kaljurand has twice been appointed to serve as the Estonian National Expert at the United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security.
 
Elaine Korzak is currently a cybersecurity fellow at the Middlebury Institute of International Studies at Monterey and an affiliate at the Center for International Security and Cooperation (CISAC) at Stanford University. Dr. Korzak was previously a fellow at the Hoover Institution and at CISAC (both Stanford University). Her research focuses on international legal and policy aspects of cybersecurity. Her doctoral thesis on the international norms debate examined the cybersecurity discussions at the United Nations and the positions of the United States, the United Kingdom, Russia and China in this process. Her other research interests include cybersecurity and export control regimes as well as cyber capacity-building. She holds a PhD from the Department of War Studies at King’s College London, a Master of Laws in public international law from the London School of Economics and a Master’s degree in international peace and security from King’s College London. Her professional experience includes NATO’s Cyber Defence Section as well as the European Commission’s Directorate-General on Information Society and Media.

Encina Hall E207

Elaine Korzak, Cybersecurity Fellow, Middlebury Institute of International Studies at Monterey
Marina Kaljurand, Chair, Global Commission on the Stability of Cyberspace

Cyber Initiative Fellow Jonathan Mayer publishes law review article on government hacking

The United States government hacks computer systems for law enforcement purposes. The article provides the first comprehensive examination of how federal law regulates government malware and argues that government hacking is inherently a Fourth Amendment search. Noted privacy scholar and Cyber Initiative fellow Jonathan Mayer explores the legal questions behind government use of hacking tools.

Read the article at https://www.yalelawjournal.org/article/government-hacking


Daphne Keller is the Director of Platform Regulation at the Stanford Program in Law, Science, & Technology. Her academic, policy, and popular press writing focuses on platform regulation and Internet users’ rights in the U.S., EU, and around the world. Her recent work has focused on platform transparency, data collection for artificial intelligence, interoperability models, and “must-carry” obligations. She has testified before legislatures, courts, and regulatory bodies around the world on topics ranging from the practical realities of content moderation to copyright and data protection. She was previously Associate General Counsel for Google, where she had responsibility for the company’s web search products. She is a graduate of Yale Law School, Brown University, and Head Start.

FILINGS

  • U.S. Supreme Court amicus brief on behalf of Francis Fukuyama, NetChoice v. Moody (2024)
  • U.S. Supreme Court amicus brief with ACLU, Gonzalez v. Google (2023)
  • Comment to European Commission on data access under EU Digital Services Act
  • U.S. Senate testimony on platform transparency

 


Director of Platform Regulation, Stanford Program in Law, Science & Technology (LST)
Social Science Research Scholar
Senior Fellow at the Freeman Spogli Institute for International Studies
Professor of Communication
Professor, by courtesy, of Political Science and of Sociology
Stanford Affiliate at the Stanford Center on China's Economy and Institutions
Stanford Affiliate at the Tech Impact and Policy Center

Jennifer Pan is a Professor of Communication and a Senior Fellow at the Freeman Spogli Institute at Stanford University. Her research focuses on political communication and authoritarian politics. Pan uses experimental and computational methods with large-scale datasets on political activity in China and other authoritarian regimes to answer questions about how autocrats perpetuate their rule, how political censorship, propaganda, and information manipulation work in the digital age, and how preferences and behaviors are shaped as a result.

Her book, Welfare for Autocrats: How Social Assistance in China Cares for its Rulers (Oxford, 2020), shows how China's pursuit of political order transformed the country’s main social assistance program, Dibao, for repressive purposes. Her work has appeared in peer-reviewed publications such as the American Political Science Review, American Journal of Political Science, Comparative Political Studies, Journal of Politics, and Science.

She graduated from Princeton University, summa cum laude, and received her Ph.D. from Harvard University’s Department of Government.


Jeff Hancock is the founding director of the Center and is well known for his research on how people use deception with technology, from sending texts and emails to detecting fake online reviews. He is the Harry and Norman Chandler Professor of Communication at Stanford University, Founding Director of the Stanford Social Media Lab, a senior fellow at the Freeman Spogli Institute (FSI), Founding Editor of the Journal of Online Trust & Safety, and previously co-director of the Stanford Cyber Policy Center.

A leading expert in behavioral sciences and the psychology of online interaction, Professor Hancock studies the psychological aspects of social media and AI technology. Professor Hancock earned his PhD in Psychology at Dalhousie University in Canada and was Professor of Information Science and Communication at Cornell prior to joining Stanford in 2015.

Director, Stanford Social Media Lab
Director, Stanford Tech Impact and Policy Center
Senior Fellow at the Freeman Spogli Institute for International Studies
Institute Faculty, Freeman Spogli Institute for International Studies

Stanford University took first place in the annual National Collegiate Pentesting Competition hosted Nov. 3–5 at RIT. In the competition, teams from 10 national universities were charged with attacking and analyzing a computer network.

The nation’s brightest cybersecurity college students traveled to Rochester, N.Y., to test their hacking skills in the annual National Collegiate Penetration Testing Competition. The Nov. 3–5 event allowed students to learn about penetration testing and offensive cybersecurity rather than the defensive side, which is the focus of the CCDC cyber defense competition. Teams from 10 national universities faced off at RIT's B. Thomas Golisano College of Computing and Information Sciences, breaking into fabricated computer networks, evaluating their weak points and presenting plans to better secure them.

The event is sponsored by Uber, Crowe Horwath and IBM Security.

Stanford University took home the top trophy in the competition. The top teams were noted for their ability to verbalize, document and clearly present their findings to an audience with varying levels of technical understanding.
