76 Platforms v. Supreme Court with Daphne Keller (part 1)

Daphne Keller spoke with the Initiative for Digital Public Infrastructure at the University of Massachusetts Amherst about two potentially major cases currently before the Supreme Court


Picture this: you are sitting in the kitchen of your home enjoying a drink. As you sip, you scroll through your phone, looking at the news of the day. You text a link to a news article critiquing your government’s stance on the press to a friend who works in media. Your sibling sends you a message on an encrypted service updating you on the details of their upcoming travel plans. You set a reminder on your calendar about a doctor’s appointment, then open your banking app to make sure the payment for this month’s rent was processed.

Everything about this scene is personal. Nothing about it is private.

Without your knowledge or consent, your phone has been infected with spyware. This technology makes it possible for someone to silently watch you and take careful notes about who you are, who you know, and what you’re doing. They see your files, have your contacts, and know the exact route you took home from work on any given day. They can even turn on your phone’s microphone and listen to the conversations you’re having in the room.

This is not some hypothetical, Orwellian drama, but a reality for thousands of people around the world. This kind of technology — once a capability of only the most technologically advanced governments — is now commercially available from numerous private companies known to sell it to state agencies and private actors alike. This total loss of privacy should worry everyone, but for human rights activists and journalists challenging authoritarian powers, it has become a matter of life and death.

The companies that develop and sell this technology are only passively accountable to governments at best, and at worst have their tacit support. And it is this lack of regulation that Marietje Schaake, the International Policy Director at the Cyber Policy Center and International Policy Fellow at Stanford HAI, is trying to change.
 

Amsterdam and Tehran: A Tale of Two Elections


Schaake did not begin her professional career with the intention of becoming Europe’s “most wired politician,” as she has frequently been dubbed by the press. In many ways, her step into politics came as something of a surprise, albeit a pleasant one.
 
“I've always been very interested in public service and trying to improve society and the lives of others, but I ran not expecting at all that I would actually get elected,” Schaake confesses.

As a candidate on the 2008 ticket for the Democrats 66 (D66) political party of the Netherlands, Schaake saw herself as someone who could help move the party’s campaign forward, not as a serious contender under the open-list election system. But her party performed exceptionally well, and at the age of 30, Schaake landed in third position on a 30-person list vying for the 25 seats allotted to representatives from all political parties in the Netherlands. Having taken a top spot among a field of hundreds of candidates, she found herself on her way to becoming a Member of the European Parliament (MEP).

Marietje Schaake participates in a panel on human rights and communication technologies as a member of the European Parliament in April 2012. Alberto Novi, Flickr

In 2009, world events collided with Schaake’s position as a newly seated MEP. While the democratic elections in the EU were unfolding without incident, 3,000 miles away in Iran a very different story was playing out. Following the re-election of Mahmoud Ahmadinejad to a second term as Iran’s president, allegations of fraud and vote tampering were immediately voiced by supporters of former prime minister Mir-Hossein Mousavi, the leading candidate opposing Ahmadinejad. The protests that followed quickly morphed into the Green Movement, one of the largest sustained protest movements in Iran since the Iranian Revolution of 1979, unmatched until the protests over the death of Mahsa Amini began in September 2022.
 
With the protests came an increased wave of state violence against the demonstrators. While repression and intimidation are nothing new to autocratic regimes, in 2009 the proliferation of cell phones among an increasingly digitally connected population allowed citizens to document human rights abuses firsthand and beam the evidence directly from the streets of Tehran to the rest of the world in real time.
 
As more and more footage poured in from the situation on the ground, Schaake, with a pre-politics background in human rights and a specific interest in civil rights, took up the case of the Green Movement as one of her first major issues in the European Parliament. She was appointed spokesperson on Iran for her political group. 

Marietje Schaake [second from left] alongside her colleagues from the European Parliament during a press conference on universal human rights in 2010. Alberto Novi, Flickr

The Best of Tech and the Worst of Tech


But the more Schaake learned, the clearer it became that the Iranian protesters were not the only ones using technology to stay informed about the demonstrations. Meeting with human rights defenders who had escaped from Iran to eastern Turkey, Schaake was told anecdote after anecdote about how the Islamic Republic’s authorities were using tech to surveil, track, and censor dissenting opinions.
 
Investigations indicated that the authorities were using a technique referred to at the time as “deep packet inspection,” which allows the operator of a communications network to read and block information in transit, alter communications, and collect data about specific individuals. What was more, journalists revealed that many of the systems such regimes were using to perform this type of surveillance had been bought from, and were serviced by, Western companies.
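The technique is easier to picture with a small sketch. The following Python toy is illustrative only: the packet layout, keyword list, and function names are all invented for this example, and real carrier-grade inspection systems operate on live network traffic with far more sophistication. The core idea is simply that an ordinary router reads only a packet’s headers, while a DPI system reads, and acts on, the payload itself.

```python
# Toy sketch of the idea behind deep packet inspection (DPI).
# Everything here (packet layout, keyword list) is hypothetical and
# for illustration only; real DPI operates at the carrier level.

BLOCKED_KEYWORDS = [b"protest", b"rally"]  # hypothetical censorship list

def route_by_header(packet: dict) -> str:
    """What an ordinary router does: look at headers only, ignore the payload."""
    return f"forward to {packet['dst']}"

def deep_packet_inspect(packet: dict) -> str:
    """What a DPI middlebox does: read the payload itself and act on its content."""
    for keyword in BLOCKED_KEYWORDS:
        if keyword in packet["payload"]:
            # The network operator can drop the message and record who sent it.
            return f"drop; log sender {packet['src']}"
    return f"forward to {packet['dst']}"

packet = {"src": "10.0.0.5", "dst": "203.0.113.7",
          "payload": b"join the protest at noon"}
print(route_by_header(packet))      # the router forwards it regardless of content
print(deep_packet_inspect(packet))  # the DPI box blocks it and logs the sender
```

The same payload access that enables this kind of keyword blocking is what enables the logging and alteration described above, which is why the technique proved so useful for surveillance.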
 
For Schaake, this revelation was a turning point in her focus as a politician and the beginning of her journey into the realm of cyber policy and tech regulation.
 
“On the one hand, we were sharing statements urging to respect the human rights of the demonstrators. And then it turned out that European companies were the ones selling this monitoring equipment to the Iranian regime. It became immediately clear to me that if technology was to play a role in enhancing human rights and democracy, we couldn’t simply trust the market to make it so; we needed to have rules,” Schaake explained.

We have to have a line in the sand and a limit to the use of this technology. It’s extremely important, because this is proliferating not only to governments, but also to non-state actors.
Marietje Schaake
International Policy Director at the Cyber Policy Center

The Transatlantic Divide


But who writes the rules? When it comes to tech regulation, there is longstanding unease between the private and public sectors, and a marked difference in approach between the eastern and western shores of the Atlantic. In general, EU member countries favor oversight of the technology sector and have supported legislation like the General Data Protection Regulation (GDPR) and the Digital Services Act to protect user privacy and digital human rights. Major tech companies, on the other hand — many of them based in North America — favor the doctrine of self-regulation and frequently cite claims to intellectual property or broadly defined protections such as Section 230 to justify keeping government oversight at arm’s length. Efforts by governing bodies like the European Union to legislate privacy and transparency requirements are often met with raised hackles.
 
It’s a feeling Schaake has encountered many times in her work. “When you talk to companies in Silicon Valley, they make it sound as if Europeans are after them and that these regulations are weapons meant to punish them,” she says.
 
But the need to place checks on those with power is rooted in history, not histrionics, says Schaake. Memories of living under the eye of surveillance states such as the Soviet Union and East Germany are still fresh in many Europeans’ minds. The drive to protect privacy is as much about keeping the government in check as it is about reining in the outsized influence and power of private technology companies, Schaake asserts.
 

Big Brother Is Watching


In the last few years, the momentum has begun to shift. 
 
In 2021, a joint reporting effort by The Guardian, The Washington Post, Le Monde, Proceso, and over 80 journalists at a dozen additional news outlets, working in partnership with Amnesty International and Forbidden Stories, published the Pegasus Project, a detailed report showing that spyware from the private company NSO Group had been used to target, track, and retaliate against tens of thousands of journalists, activists, civil rights leaders, and even prominent politicians around the world.
 
This type of surveillance has quickly evolved beyond the network monitoring undertaken by regimes like Iran’s in the 2000s, and now taps into the most personal details of an individual’s device, data, and communications. In the absence of widespread regulation, companies like NSO Group have been able to develop commercial products with capabilities as sophisticated as those of state intelligence agencies. In many cases, “zero-click” infections are now possible, meaning a device can be targeted and have the spyware installed without the user ever knowing, or even suspecting, that they have become a victim of covert surveillance.

Marietje Schaake [left] moderates a panel at the 2023 Summit for Democracy with Neal Mohan, CEO of YouTube; John Scott-Railton, Senior Researcher at Citizen Lab; Avril Haines, U.S. Director of National Intelligence; and Alejandro N. Mayorkas, U.S. Secretary of Homeland Security. U.S. Department of State

“If we were to create a spectrum of harmful technologies, spyware could easily take the top position,” said Schaake, speaking as the moderator of a panel on “Countering the Misuse of Technology and the Rise of Digital Authoritarianism” at the 2023 Summit for Democracy co-hosted by U.S. President Joe Biden alongside the governments of Costa Rica, the Netherlands, Republic of Korea, and Republic of Zambia.
 
Revelations like those of the Pegasus Project have helped spur what Schaake believes is long-overdue action from the United States on regulating this sector of the tech world. On March 27, 2023, President Biden signed an executive order prohibiting the operational use of commercial spyware products by the United States government. It is the first time such an action has been formally taken in Washington.
 
For Schaake, the order is a “fantastic first step,” but she also cautions that there is still much more to be done. The executive order does not limit spyware that governments make themselves, nor does it restrict use by private individuals who can get their hands on these tools.

Human Rights vs. National Security


One of Schaake’s main concerns is the potential for governmental overreach in the pursuit of curtailing the influence of private companies.
 
Schaake explains, “What's interesting is that while the motivation in Europe for this kind of regulation is very much anchored in fundamental rights, in the U.S., what typically moves the needle is a call to national security, or concern for China.”
 
It is important to stay vigilant about how national security can become a justification for curtailing civil liberties. Writing for the Financial Times, Schaake elaborated on the potential conflict of interest the government has in regulating tech more rigorously:
 
“The U.S. government is right to regulate technology companies. But the proposed measures, devised through the prism of national security policy, must also pass the democracy test. After 9/11, the obsession with national security led to warrantless wiretapping and mass data collection. I back moves to curb the outsized power of technology firms large and small. But government power must not be abused.”
 
While Schaake hopes well-established democracies will do more to lead by example, she also acknowledges that the political will to actually step up to do so is often lacking. In principle, countries rooted in the rule of law and the principles of human rights decry the use of surveillance technology beyond their own borders. But in practice, these same governments are also sometimes customers of the surveillance industrial complex. 

It’s up to us to guarantee the upsides of technology and limit its downsides. That’s how we are going to best serve our democracy in this moment.
Marietje Schaake
International Policy Director at the Cyber Policy Center

Schaake has been trying to make that disparity an impossible needle for nations to keep threading. For over a decade, she has called for an end to the surveillance industry and has worked on developing export control rules for the sale of surveillance technology from Europe to other parts of the world. But while these measures make it harder for non-democratic regimes to purchase such products from the West, the legislation is still limited in its ability to keep European and other Western nations from importing spyware systems like Pegasus themselves. And for as long as that reality remains, it undermines the credibility of the EU and the West as a whole, says Schaake.
 
Speaking at the 2023 Summit for Democracy, Schaake urged policymakers to keep the bigger picture in mind when it comes to the risks of unaccountable, ungoverned spyware industries. “We have to have a line in the sand and a limit to the use of this technology. It’s extremely important, because this is proliferating not only to governments, but also to non-state actors. This is not the world we want to live in.”

 

Building Momentum for the Future


Drawing those lines in the sand is crucial not just for the immediate safety and protection of individuals who have been targeted with spyware, but also for addressing technology’s other harms to the long-term health of democracy.

“The narrative that technology is helping people's democratic rights, or access to information, or free speech has been oversold, whereas the need to actually ensure that democratic principles govern technology companies has been underdeveloped,” Schaake argues.

While no longer an active politician, Schaake has not slowed her pace in raising awareness and lending her expertise to policymakers trying to thread the digital needle on tech regulation. Working at the Cyber Policy Center at the Freeman Spogli Institute for International Studies (FSI), Schaake has been able to combine her experience in European politics with her academic work in the United States against the backdrop of Silicon Valley, home base for many of the world’s leading technology companies and executives.
 
Though now half a globe away from the European Parliament, Schaake’s original motivations to improve society and people’s lives have not dimmed.

Though no longer working in government, Schaake, seen here at a conference on regulating Big Tech hosted by Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), continues to research and advocate for better regulation of technology industries. Midori Yoshimura

“It’s up to us to guarantee the upsides of technology and limit its downsides. That’s how we are going to best serve our democracy in this moment,” she says.
 
Schaake is clear-eyed about the hurdles still ahead on the road to meaningful legislation about tech transparency and human rights in digital spaces. With a highly partisan Congress in the United States and other issues like the war in Ukraine and concerns over China taking center stage, it will take time and effort to build a critical mass of political will to tackle these issues. But Biden’s executive order and the discussion of issues like digital authoritarianism at the Summit for Democracy also give Schaake hope that progress can be made.
 
“The bad news is we're not there yet. The good news is there's a lot of momentum for positive change and improvement, and I feel like people are beginning to understand how much it is needed.”
 
And for anyone ready to jump into the fray and make an impact, Schaake adds a standing invitation: “I’m always happy to grab a coffee and chat. Let’s talk!”



The complete recording of "Countering the Misuse of Technology and the Rise of Digital Authoritarianism," the panel Marietje Schaake moderated at the 2023 Summit for Democracy, is available below.


A transatlantic background and a decade of experience as a lawmaker in the European Parliament have given Marietje Schaake a unique perspective as a researcher investigating the harms technology is causing to democracy and human rights.

Former Research Scholar, Stanford Internet Observatory

Riana Pfefferkorn was a Research Scholar at the Stanford Internet Observatory. She investigated the U.S. and other governments’ policies and practices for forcing decryption and/or influencing the security design of online platforms and services, devices, and products, both via technical means and through the courts and legislatures. Riana also studied novel forms of electronic surveillance and data access by U.S. law enforcement and their impact on civil liberties.

Previously, Riana was the Associate Director of Surveillance and Cybersecurity at the Stanford Center for Internet and Society, where she remains an affiliate. Prior to joining Stanford, she was an associate in the Internet Strategy & Litigation group at the law firm of Wilson Sonsini Goodrich & Rosati, and a law clerk to the Honorable Bruce J. McGiverin of the U.S. District Court for the District of Puerto Rico. During law school, she interned for the Honorable Stephen Reinhardt of the U.S. Court of Appeals for the Ninth Circuit.

Riana has spoken at various legal and security conferences, including Black Hat and DEF CON's Crypto & Privacy Village. She is frequently quoted in the press, including the New York Times, the Washington Post, and NPR. Riana is a graduate of the University of Washington School of Law and Whitman College.

Complete list of publications and recent blog posts here.

Stanford Internet Observatory

Today the Stanford Internet Observatory published a white paper on GRU online influence operations from 2014 to 2019. The authors conducted this research at the request of the United States Senate Select Committee on Intelligence (SSCI) and began with a data set consisting of social media posts provided to the Committee by Facebook. Facebook attributed the Pages and posts in this data set to the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (Главное управление Генерального штаба Вооружённых сил Российской Федерации), known as the GU, or by its prior acronym, GRU, and removed the content in or before 2018. The data provided by Facebook to SSCI consisted of 28 folders, each corresponding to at least one unique Facebook Page. These Pages were in turn tied to discrete GRU-attributed operations. Some of these Pages and operations were significant; others were so minor they scarcely had any data associated with them at all.

While some content related to these operations has been unearthed by investigative journalists, a substantial amount has not been seen by the public in the context of GRU attribution. The SIO white paper is intended to provide an overview of the GRU tactics used in these operations and to offer key takeaways about the distinct operational clusters observed in the data. Although the initial leads were provided by the Facebook data set, many of these Pages have ties to material that remains accessible on the broader internet, and we have attempted to aggregate and archive that broader expanse of data for public viewing and in service to further academic research.

Several key takeaways appear in the analysis:

  • Traditional narrative laundering operations updated for the internet age. Narrative laundering – the technique of moving a certain narrative from its state-run origins into the wider media ecosystem through the use of aligned publications, “useful idiots,” and, perhaps, witting participants – is an "active-measures" tactic with a long history. In this white paper we show how narrative laundering has been updated for the social-media era. The GRU created think tanks and media outlets to serve as initial content drops, and fabricated personas — fake online identities — to serve as authors. A network of accounts additionally served as distributors, posting the content to platforms such as Twitter and Reddit. In this way, GRU-created content could make its way from a GRU media property to an ideologically aligned, genuinely independent media website to Facebook to Reddit — a process designed to reduce skepticism toward the original, unknown blog.



The website for NBene Group, a GRU-attributed think tank. In one striking example of how this content can spread, an NBene Group piece about the annexation of Crimea was cited in an American military law journal article. 

  • The emergence of a two-pronged approach: narrative and memetic propaganda by different entities belonging to a single state actor. The GRU aimed to achieve influence by feeding its narratives into the wider mass-media ecosystem with the help of think tanks, affiliated websites, and fake personas. This strategy is distinct from that of the Internet Research Agency, which invested primarily in a social-first memetic (i.e., meme-based) approach  to achieve influence, including ad purchases, direct engagement with users on social media, and content crafted specifically with virality in mind. Although the GRU conducted operations on Facebook, it either did not view maximizing social audience engagement as a priority or did not have the wherewithal to do so. To the contrary, it appears to have designed its operation to achieve influence in other ways. 

  • A deeper understanding of hack-and-leak operations. GRU hack-and-leak operations are well known. This tactic — which has been described in detail in the Mueller Report — had a particularly remarkable impact on the 2016 U.S. election, but the GRU conducted other hack-and-leak operations between 2014 and 2019 as well. One of the salient characteristics of this tactic is the need for a second party (such as WikiLeaks) to spread the results of a hack-and-leak operation, since leaking hacked documents is ineffective without an audience. In this white paper we analyze the GRU’s methods for disseminating the results of its hack-and-leak operations. While its attempts to do so through its own social media accounts were generally ineffective, it did have success in generating media attention (including on RT), which led in turn to wider coverage of the results of these operations. Fancy Bear’s own Facebook posts about its hack-and-leak attack on the World Anti-Doping Agency (WADA), for example, received relatively little engagement, but write-ups in Wired and The Guardian ensured that its operations got wider attention.

Some of the most noteworthy operations we analyze in this white paper include:

  • Inside Syria Media Center (ISMC), a media entity that was created as part of the Russian government’s multifarious influence operation in support of Syrian President Bashar al-Assad. Although ISMC claimed to be “[c]ollecting information about the Syrian conflict from ground-level sources,” its actual function was to boost Assad and discredit Western forces and allies, including the White Helmets. Our analysis of the ISMC Facebook Page shows exceptionally low engagement — across 5,367 posts the average engagement was 0.1 Likes per post — but ISMC articles achieved wider attention when its numerous author personas (there were six) reposted them on other sites. We counted 142 unique domains that reposted ISMC articles. This process happened quickly; a single article could be reposted on many alternative media sites within days of initial publication on the ISMC website. We observe that, while both Internet Research Agency (IRA) and GRU operations covered Syria, the IRA only rarely linked to the ISMC website.



The Quora profile for Sophie Mangal, one of the personas that authored and distributed ISMC content.

 

  • APT28, also known as Fancy Bear, a cyber-espionage group identified by the Special Counsel investigation as GRU Units 26165 and 74455. This entity has conducted cyber attacks in connection with a number of Russian strategic objectives, including, most famously, the DNC hack of 2016. The Facebook data set provided to SSCI included multiple Pages related to hacking operations, among them DCLeaks and Fancy Bears Hack Team, a sports-related Page. This activity included a hack-and-leak attack on WADA, almost certainly in retaliation for WADA’s recommendation that the International Olympic Committee ban the Russian team from the 2016 Olympics in Rio de Janeiro. The documents leaked (and, according to WADA, altered) by Fancy Bears purported to show that athletes from EU countries and the US were cheating by receiving spurious therapeutic use exemptions. Our analysis of these Pages looks at their sparse engagement on social platforms, in stark contrast to the substantial coverage in the mainstream press. It also notes the boosting of such operations by Russian state-linked Twitter accounts, RT, and Sputnik.

  • CyberBerkut, Committee of Soldiers’ Mothers of Ukraine, and “For an Exit from Ukraine,” a network of Pages targeting Ukraine, which has been subject to an aggressive disinformation campaign by the Russian government since the Euromaidan revolution in 2014. Our investigation of these Pages highlights the degree to which apparently conflicting messages can be harnessed together in support of a single overarching objective. (This also suggests a parallel with the tactics of the IRA, which frequently boosted groups on opposite sides of contentious issues.) Among the multiple, diverging operational vectors we analyzed were attempts to sow disinformation intended to delegitimize the government in Kyiv; to leverage a Ukrainian civil-society group to undermine public confidence in the army; and to convince Ukrainians that their country was “without a future” and that they were better off emigrating to Poland. While the Pages we analyzed worked with disparate themes, their content was consistently aimed at undermining the government in Kyiv and aggravating tensions between Eastern and Western Ukraine.

Considered as a whole, the data provided by Facebook — along with the larger online network of websites and accounts that these Pages are connected to — reveal a large, multifaceted operation set up with the aim of artificially boosting narratives favorable to the Russian state and disparaging Russia’s rivals. Over a period when Russia was engaged in a wide range of geopolitical and cultural conflicts, including Ukraine, MH17, Syria, the Skripal Affair, the Olympics ban, and NATO expansion, the GRU turned to active measures to try to make the narrative playing field more favorable. These active measures included social-media tactics that were repetitively deployed but seldom successful when executed by the GRU. When the tactics were successful, it was typically because they exploited mainstream media outlets; leveraged purportedly independent alternative media that acts, at best, as an uncritical recipient of contributed pieces; and used fake authors and fake grassroots amplifiers to articulate and distribute the state’s point of view. Given that many of these tactics are analogs of those used in Cold-War influence operations, it seems certain that they will continue to be refined and updated for the internet era, and are likely to be used to greater effect. 


The linked white paper and its conclusions are in part based on the analysis of social-media content that was provided to the authors by the Senate Select Committee on Intelligence under the auspices of the Committee’s Technical Advisory Group, whose Members serve to provide substantive technical and expert advice on topics of importance to ongoing Committee activity and oversight. The findings, interpretations, and conclusions presented herein are those of the authors, and do not necessarily represent the views of the Senate Select Committee on Intelligence or its Membership.

 

Renée DiResta is the former Research Manager at the Stanford Internet Observatory, where she investigated the spread of malign narratives across social networks and assisted policymakers in understanding and responding to the problem. She has advised Congress, the State Department, and other academic, civic, and business organizations, and has studied disinformation and computational propaganda in the context of pseudoscience conspiracies, terrorism, and state-sponsored information warfare.

You can see a full list of Renée's writing and speeches on her website: www.reneediresta.com or follow her @noupside.

 

Former Research Manager, Stanford Internet Observatory

The House Permanent Select Committee on Intelligence held a public hearing on Thursday, March 28, 2019, as part of its investigation into Russian influence during and after the 2016 election campaign.

The hearing, “Putin’s Playbook: The Kremlin’s Use of Oligarchs, Money and Intelligence in 2016 and Beyond,” included testimony by Michael McFaul, former U.S. Ambassador to Russia and Director of the Freeman Spogli Institute at Stanford University.


Download Complete Testimony (PDF 263 KB)

EXCERPT

To contain and thwart the malicious effects of “Putinism,” the United States government and the American people must first understand the nature of the threat. This testimony focuses on the nexus of political and economic power within Russia under Putin’s leadership, and how these domestic practices can be used abroad to advance Putin’s foreign policy agenda. Moreover, it is important to underscore that crony capitalism, property rights provided by the state, bribery, and corruption constitute only a few of many different mechanisms used by Putin in his domestic authority and foreign policy abroad.

This testimony proceeds in three parts. Section I describes the evolution of Putin’s system of government at home, focusing in particular on the relationship between the state and big business. Section II illustrates how Putin seeks to export his ideas and practices abroad. Section III focuses on Putin’s specific foreign policy objective of lifting sanctions on Russian individuals and companies.

Watch the C-SPAN recording of the testimony


Media Contact: Ari Chasnoff, Assistant Director for Communications, 650-725-2371, chasnoff@stanford.edu


Alex Stamos is a cybersecurity expert, business leader and entrepreneur working to improve the security and safety of the Internet. Stamos was the founding director of the Stanford Internet Observatory at the Cyber Policy Center, a part of the Freeman Spogli Institute for International Studies. He is currently a lecturer, teaching in both the Masters in International Policy Program and in Computer Science.

Prior to joining Stanford, Alex served as the Chief Security Officer of Facebook. In this role, Stamos led a team of engineers, researchers, investigators and analysts charged with understanding and mitigating information security risks to the company and safety risks to the 2.5 billion people on Facebook, Instagram and WhatsApp. During his time at Facebook, he led the company’s investigation into manipulation of the 2016 US election and helped pioneer several successful protections against these new classes of abuse. As a senior executive, Alex represented Facebook and Silicon Valley to regulators, lawmakers and civil society on six continents, and has served as a bridge between the interests of the Internet policy community and the complicated reality of platforms operating at billion-user scale. In April 2017, he co-authored “Information Operations and Facebook”, a highly cited examination of the influence campaign against the US election, which still stands as the most thorough description of the issue by a major technology company.

Before joining Facebook, Alex was the Chief Information Security Officer at Yahoo, rebuilding a storied security team while dealing with multiple assaults by nation-state actors. While at Yahoo, he led the company’s response to the Snowden disclosures by implementing massive cryptographic improvements in his first months. He also represented the company in an open hearing of the US Senate’s Permanent Subcommittee on Investigations.

In 2004, Alex co-founded iSEC Partners, an elite security consultancy known for groundbreaking work in secure software development, embedded and mobile security. As a trusted partner to the world’s largest technology firms, Alex coordinated the response to the “Aurora” attacks by the People’s Liberation Army at multiple Silicon Valley firms and led groundbreaking work securing the world’s largest desktop and mobile platforms. During this time, he also served as an expert witness in several notable civil and criminal cases, such as the Google Street View incident, and did pro bono work for the defendants in Sony vs. George Hotz and US vs. Aaron Swartz. After the 2010 acquisition of iSEC Partners by NCC Group, Alex formed an experimental R&D division at the combined company, producing five patents.

A noted speaker and writer, he has appeared at the Munich Security Conference, NATO CyCon, Web Summit, DEF CON, CanSecWest and numerous other events. His 2017 keynote at Black Hat was noted for its call for a security industry more representative of the diverse people it serves and the actual risks they face. Throughout his career, Alex has worked toward making security a more representative field and has highlighted the work of diverse technologists as an organizer of the Trustworthy Technology Conference and OURSA.

Alex has been involved with securing the US election system as a contributor to Harvard’s Defending Digital Democracy Project, and with the academic community as an advisor to Stanford’s Cybersecurity Policy Program and UC Berkeley’s Center for Long-Term Cybersecurity. He is a member of the Aspen Institute’s Cyber Security Task Force, the Bay Area CSO Council and the Council on Foreign Relations. Alex also serves on the advisory board to NATO’s Collective Cybersecurity Center of Excellence in Tallinn, Estonia.

Former Director, Stanford Internet Observatory
Lecturer, Masters in International Policy
Lecturer, Computer Science

Through the Hack the Pentagon program, the Department of Defense (DoD) asked Synack to look for vulnerabilities left undetected by traditional security solutions in one of its highly complex and sensitive systems. The DoD pushed the limits of security testing beyond those of most enterprises, and the results were surprising. Hear from Synack CEO Jay Kaplan how the government can benefit from bug bounty programs, what Hack the Pentagon revealed about DoD security, and why more and more organizations are employing red team penetration testing.

Jay Kaplan co-founded Synack after serving in several security-related capacities at the Department of Defense, including the DoD’s Incident Response and Red Team. Prior to founding Synack, Jay was a Senior Cyber Analyst at the National Security Agency (NSA), where his focus was supporting counterterrorism-related intelligence operations. Jay received a BS in Computer Science with a focus in Information Assurance and an MS in Engineering Management from George Washington University, studying under a DoD/NSA-sponsored fellowship. Jay holds a number of security certifications from ISC(2) and GIAC.

Encina Hall, E008 (garden level)

Jay Kaplan CEO Synack
Seminars

Are you interested in cybersecurity? Have you wanted to learn offensive cyber techniques but don't know where to get started? The Applied Cybersecurity team is hosting an introductory workshop to get people started practicing exploitation and offensive cyber techniques in an ethical setting. In particular, we will focus on gaining familiarity with techniques used for competing in Capture the Flag (CTF) competitions. We'll be hosting the first workshop this Friday, in preparation for the Hitcon CTF next week. Bring a laptop! This workshop assumes no prerequisite experience with hacking or cybersecurity, so please attend regardless of how unfamiliar you are with the topic. For this workshop, we will focus on web vulnerabilities, binary reversing, and some basic cryptography challenges. Note that experience equivalent to CS107 will be useful. Food will be provided! RSVP here: https://goo.gl/forms/M5yzuQasIZpL4Ovy1

Shriram 366
