FSI scholars produce research aimed at creating a safer world and examining the consequences of security policies on institutions and society. They look at longstanding issues, including nuclear nonproliferation and the conflict between North and South Korea. Their research also examines new and emerging threats that transcend traditional borders, such as the drug war in Mexico and expanding terrorism networks. FSI researchers look at the changing methods of warfare, with a focus on biosecurity and nuclear risk. They tackle cybersecurity with an eye toward privacy concerns and explore the implications of new actors such as hackers.
Along with the changing face of conflict, terrorism and crime, FSI researchers study food security. They tackle the global problems of hunger, poverty and environmental degradation by generating knowledge and policy-relevant solutions.
This fourth workshop of the Stanford Internet Observatory's End-to-End Encryption series will focus on civil society concerns in an encrypted world. We will discuss strategies and tradeoffs to make encrypted platforms safer without compromising security.
Login details will be provided to registered participants. This event was originally scheduled for November 16 and has been moved to December 7. Registration link forthcoming.
AS WE APPROACH THE 2020 ELECTION IN THE UNITED STATES, content moderation on social media platforms is taking center stage. From speech issues on Facebook and Twitter to YouTube videos and TikTok brigading, the current election season is being reshaped by curation concerns about what’s allowed online, what’s not, upranking and downranking, and who’s deciding.
Trust in content is another major challenge as conspiracies and mis- and disinformation go viral. With billions of pieces of content posted every day, what balance should be struck between automated and human moderation? Are AI and machine learning to blame when companies miss content they promised to remove, or do we need to look to human content moderators and those sitting in the boardrooms?
THE EMERGENCE OF A DIGITAL SPHERE where public debate takes place raises profound questions about the connection between online information and polarization, echo chambers, and filter bubbles. Does the information ecosystem created by social media companies support the conditions necessary for a healthy democracy? Is it different from other media? These are particularly urgent questions as the United States approaches a contentious 2020 election during the COVID-19 pandemic.
The influence of technology and AI-curated information on America’s democratic process is being examined in the eight-week Stanford University course, “Technology and the 2020 Election: How Silicon Valley Technologies Affect Elections and Shape Democracy.” This issue brief focuses on the class session on “Echo Chambers, Filter Bubbles, and Polarization,” with guest experts Joan Donovan and Joshua Tucker.
Since the 2016 election, great attention has been paid to the impact of digital technologies on democracy in the United States and around the world. Foreign intervention into the U.S. campaign through social media and online advertising, including the rise of "fake news" and computational propaganda, exacerbated concerns that new technologies posed a substantial threat to the normal workings of the U.S. electoral process. These concerns remain for 2020, alongside new threats related to the COVID-19 pandemic. With in-person campaigning, voter mobilization, and even voting itself hindered by the pandemic, digital technologies promise to play an even more important role in 2020.
On December 10, 2020, the Stanford Cyber Policy Center will bring together scholars, tech platforms, principals from the digital campaigns, journalists, and other key experts to explore the effect of digital technologies on the 2020 Election. The conference will explore the role of digital technologies on election administration, campaign tactics, political advertising, the news media, foreign propaganda efforts, and the broader campaign information ecosystem. It will also consider how changes in platform policies affected the campaign and information environment, and whether lessons learned in the 2020 elections suggest that further changes are warranted.
9:00: Introduction: Kelly Born, Stanford Cyber Policy Center
9:10: Findings from the Stanford/MIT Healthy Elections Project: Nate Persily, Stanford Law School
9:30: Trends in 2020 Political Advertising: Erika Franklin Fowler, Wesleyan Media Project
9:50: Online Political Transparency: Laura Edelson, New York University Political Ads Project
10:10: BREAK
10:20: Center for Technology and Civic Life’s Elections Project: Tiana Epps-Johnson, CTCL
10:40: Platform Speech Policies and the Elections: Daniel Kreiss and Bridget Barrett, University of North Carolina at Chapel Hill
11:00: Findings from the Election Integrity Project: Alex Stamos and Renee DiResta, Stanford Internet Observatory
11:20: BREAK
11:30: Experiences from the Platforms
Nathaniel Gleicher, Head of Cybersecurity Policy at Facebook
Clement Wolf, Global Public Policy Lead for Information Integrity at Google
On October 8, 2020, Twitter announced the takedown of an operation it attributed to Iran. The actors compromised real accounts to tweet Black Lives Matter content, and additionally created fake accounts with bios stolen from other real accounts on Twitter. The Stanford Internet Observatory analyzed the accounts’ behaviors, tweets and images related to this relatively small operation. The activity observed in this dataset — compromising Twitter accounts, then leveraging them to disseminate messaging — appears to be a departure from prior Iran-linked activity. As we will discuss, the effort involved unrefined messaging and ineffective dissemination. Other Iran-linked malign actors involved in prior influence operations appear to have been far more adept at creating fake social media personas for the purpose of disseminating propaganda. Topically, as SIO has previously noted, verified accounts run by Iranian regime leaders and Iranian state media have previously waded into the Black Lives Matter conversation, posting support for protestors, portraying American police as fascists, and declaring that the US government is guilty of human rights violations and racism.
Key Takeaways
In total, 104 accounts were utilized in the Iran-linked operation. Of these, 81 were real accounts that had been hacked for the purposes of the operation. The remaining 23 were fake accounts that Twitter assessed were created by the malign actor, incorporating elements of theft such as bios stolen from real accounts.
The compromised accounts were hacked to tweet content about Black Lives Matter, using the hashtag #black_lives_matter. These tweets contained images or memes to advance a pro-BLM narrative.
Tweets from the fake accounts were broader in focus and covered multiple topics. These accounts tweeted in English and Arabic. A subset of English tweets by accounts claiming to be journalists shared news articles critical of Donald Trump, though the accounts also retweeted the US President. Tweets in Arabic focused on two individuals critical of the Kuwaiti government, alleging they abused or trafficked drugs.
Tactics, Techniques, and Procedures
There were two distinct tactics observed in the dataset: hacking real accounts, and creating fake personas with stolen biographies.
The majority of accounts, 81, fell into the first category: account theft. The compromised accounts sent all but two of their tweets on June 3, 2020; these tweets consisted of the hashtag #black_lives_matter and an image, such as a photo of an apparent Black Lives Matter protest. The hacked accounts were primarily located in the United States and came from a variety of communities: there were DJs, gamers, and accounts that role-played as vampires and werewolves. We observed tweets, Facebook posts and website updates from compromised accounts, some noting that they had been hacked and others stating that they had regained control of their accounts. We do not name these accounts for privacy reasons.
The second category centered on 23 fake personas created by the malign actor. These accounts followed similar naming conventions: a first name followed by a last name or last initial and a series of numbers. Of the 23 accounts, 22 were created on one of three dates in January 2020 — January 8, January 11, and January 25 — and each batch had similarities in the locations and professions listed in its bios. For example, eight accounts created on January 25, 2020, had bios claiming to be journalists. The final account, which shared the same naming convention, was created on January 22. Unlike the others, this account’s bio was in Arabic and it tweeted mostly in Arabic.
Figure 1: Account creation dates (aggregated by month) for all users in the dataset, both hacked and fake. The graph shows a spike in account creations in January 2020, when the adversary created its fake persona accounts.
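For readers curious how this kind of view is produced, the sketch below shows one plausible way to bucket account creation dates by month with pandas so that batch-created personas stand out as a spike. The handles and dates are hypothetical stand-ins, not the takedown dataset, and this is not necessarily SIO's actual tooling.

```python
# Minimal sketch: aggregate account creation timestamps by calendar month.
# The handles and dates below are hypothetical stand-ins for the dataset.
import pandas as pd

accounts = pd.DataFrame({
    "handle": ["hacked_dj", "hacked_gamer", "persona_1", "persona_2", "persona_3"],
    "created_at": pd.to_datetime([
        "2014-06-02", "2017-11-19",                 # older, organic accounts
        "2020-01-08", "2020-01-11", "2020-01-25",   # batch-created personas
    ]),
})

# Batch-created fake personas show up as a sharp spike in a single month
# (January 2020 in Figure 1).
by_month = accounts["created_at"].dt.to_period("M").value_counts().sort_index()
print(by_month)
```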
The majority of the fake accounts stole their bios from real Twitter accounts, most belonging to users located in the United Kingdom. The stolen bios ranged from those of government officials, to a primary school teacher, to TV presenters and journalists. A majority of the real accounts whose bios were stolen had large followings (the largest had 508,800 followers), though some were relatively small (101 followers). It is unclear why these individuals were selected.
The most active fake persona, Jennife55580973, and other accounts to a lesser extent, tweeted extensively at other accounts asking them to “please follow me back.” This behavior suggests the cluster was in the early phase of network building. The accounts mentioned each other in their tweets, creating a retweet ring of fake journalist personas that we discuss in more detail below.
Figure 2: The network of interactions (retweets, replies and mentions) initiated by the fake persona accounts (filtered to remove single-instance activity). This graph demonstrates the interconnectedness of the fake account network, while also showing that the accounts branched out to prominent figures such as @realdonaldtrump.
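As a rough illustration of how an interaction graph like this can be built and filtered, the sketch below uses networkx to weight directed edges by interaction count and drop single-instance activity. The account pairs (other than Jennife55580973, discussed above) and the weight threshold are hypothetical; this is a sketch of the general technique, not SIO's actual pipeline.

```python
# Minimal sketch: build a directed interaction graph from (source, target)
# pairs extracted from retweets, replies, and mentions, then filter out
# single-instance activity as in Figure 2. Pairs here are hypothetical.
import networkx as nx
from collections import Counter

interactions = [
    ("Jennife55580973", "fake_journalist_2"),
    ("Jennife55580973", "fake_journalist_2"),  # repeated interaction
    ("Jennife55580973", "realdonaldtrump"),
    ("fake_journalist_2", "Jennife55580973"),
    ("fake_journalist_3", "realdonaldtrump"),
]

# Weight each directed edge by how often the pair interacted.
weights = Counter(interactions)

G = nx.DiGraph()
for (src, dst), w in weights.items():
    if w >= 2:  # keep only repeated interactions (filter single instances)
        G.add_edge(src, dst, weight=w)

print(list(G.edges(data=True)))
```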
Themes
#black_lives_matter
The 81 hacked accounts in the dataset were minimally utilized: they tweeted the hashtag “#black_lives_matter” along with an image. There were several variants of George Floyd's face edited to include an overlay of Joaquin Phoenix-style Joker makeup, which we have elected not to include. This imagery may have been meant to show support for protesters, or to encourage a more violent revolution like the one depicted in the 2019 movie Joker; the purpose was unclear given the limited text. Other images shared in the tweets suggested a pro-BLM narrative, such as an image of Martin Luther King Jr. with the text, “even viruses know we are all made the same: STOP RACISM”.
Figure 3: Examples of additional content sent through the compromised accounts. Left: An image of what seems to be a Black Lives Matter protest, with the phrase “I can not breathe,” a slight variant from the more commonly used phrase “I can’t breathe.” Right: An image of Martin Luther King Jr. and “even viruses know we are all made the same,” an anti-racism and pro-Black Lives Matter message.
A Ring of Fake Journalists
Among the fake accounts created by the Iran-affiliated entity, eight personas claimed to be journalists. The narrative focus of the journalist ring differed based on the language of the tweets. The majority of tweets from this network were in English; a small number were in Arabic and Urdu.
English tweets
The tweets in English shared links to articles from CNN, the New York Times and the Wall Street Journal; the text of the tweets was the opening of the article (or was copied from the news outlet’s own tweet about the article). The English tweets from this journalist ring were substantively different from those of the “non-journalist” fake accounts, which neither shared article links nor focused primarily on events in the news.
English tweets from the fake journalist accounts did not center on a single dominant narrative; the accounts tweeted about global political events, COVID-19, President Donald Trump and George Floyd. The articles shared by the fake accounts were usually critical of Trump. For example, one account shared an article from CNN about how the governor of Illinois had labeled President Trump “a miserable failure.” At the same time, the accounts also retweeted a small but notable number of tweets from sources that tend to be more favorable to Donald Trump, such as Candace Owens, Charlie Kirk and Donald Trump himself.
The narratives incorporating Black Lives Matter and George Floyd were pro-BLM. For example, one account shared a CNN article and quote from Michelle Obama about the George Floyd protests:
“Race and racism is a reality that so many of us grow up learning to just deal with. But if we ever hope to move past it, it can’t just be on people of color to deal with it,” former first lady Michelle Obama said while speaking out on George Floyd’s death. https://t.co/B3ZVUa0fL3
Another account copied CNN’s tweet that claimed GOP senators had asked the President to take a “far more compassionate approach amid the deep unrest” after George Floyd’s death.
Arabic tweets
The content in Arabic from the fake journalists was mostly retweets of tweets critical of two individuals: Hani Hussein (هاني حسين), a former Kuwaiti oil minister who resigned in 2013 due to tensions with Parliament, and Abdul Hamid Dashti (عبدالحميد دشتي), a Shiite former Kuwaiti MP who was sentenced in absentia in 2016 and 2017 for remarks and tweets insulting Saudi Arabia and Bahrain. The targeting of these two individuals was not exclusive to the fake journalist accounts — all 23 fake accounts posted tweets in Arabic about them. The tweets aimed to paint Hussein and Dashti in a negative light: for example, they spread rumors that Hani Hussein was abusing drugs, and referred to Abdul Hamid Dashti as a mercenary and a degenerate thief. There was also a subset of copypasta tweets — tweets sharing verbatim text — from the fake accounts in this dataset, responding to real accounts on Twitter with tweets critical of Dashti, as seen in the tweet below.
Tweet Reply Translation: “The first residency dealer in Kuwait is the mercenary Abdul Hamid Dashti and his son Talal ‘Al-Nibras.’ This is a letter from the Iranian embassy to the Kuwaiti Ministry of Interior complaining about his trafficking in residences. The funny thing is that this degenerate thief, at the behest of the son of the tanker thief Khalifa, who stole Kuwait during the invasion, looks up to us! https://t.co/wyACCtdkUo”
The image in the tweet is allegedly a letter sent from the Iranian Embassy to the Kuwaiti Ministry of the Interior complaining that Dashti was ‘trafficking’ in government-funded residences, though the tweet did not elaborate on the allegation. Some of these tweets were retweets of politicians in Pakistan and Kuwait, such as Dr. Basel Al-Sabah, Kuwait’s Minister of Health, whose tweets included a defense of the state’s response to the pandemic in the face of “malicious rumors and propaganda.”
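Copypasta of this sort is straightforward to surface programmatically. As a minimal sketch (not SIO's actual pipeline), one could group tweets by lightly normalized text and flag any text posted verbatim by multiple accounts; the account names and tweet text below are hypothetical.

```python
# Minimal sketch: detect "copypasta" — identical tweet text reused across
# accounts. The (account, text) pairs below are hypothetical examples.
from collections import defaultdict

tweets = [
    ("acct_a", "Example repeated attack text about Dashti"),
    ("acct_b", "Example repeated  attack text about Dashti"),  # extra space
    ("acct_c", "An unrelated one-off tweet"),
]

by_text = defaultdict(set)
for account, text in tweets:
    # Normalize whitespace and case so trivial variations still match.
    by_text[" ".join(text.lower().split())].add(account)

# Copypasta candidates: identical text posted by two or more accounts.
for text, accounts in by_text.items():
    if len(accounts) >= 2:
        print(f"{len(accounts)} accounts posted: {text!r}")
```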
Given their scattered focus, it is unclear from the content what the adversary’s intended purpose was for the fake accounts.
Conclusion
Overall, this was a relatively small network in the early stages of its activity, detected and removed before it had a chance to have significant impact. Given Iranian-affiliated actors’ prior willingness to overtly leverage #BLM hashtags to denigrate American society and political leaders, it is somewhat surprising to see an Iran-linked adversary do the work of compromising accounts simply to send out a handful of #black_lives_matter tweets. While the narratives were not singularly focused across the hacked and fake accounts, this operation gives researchers more insight into the different tactics and strategies used to weigh in on political conversations and narratives on Twitter.
In this post and in the attached report we investigate a U.S. domestic astroturfing operation that Facebook attributed to the social media consultancy Rally Forge. The use of marketing agencies and social media consultancies to carry out influence operations has become common worldwide; hiring an agency may afford the client plausible deniability in the event of discovery. Rally Forge served a range of clients, including Turning Point Action and Inclusive Conservation Group. In September 2020 it was implicated in an operation uncovered by the Washington Post, in which teenagers appeared to be posting comments using fake accounts. Twitter and Facebook each took down a subset of the accounts immediately, and Facebook opened an investigation. This report provides an assessment of content taken down as a result of that investigation.
Key takeaways
Rally Forge-linked accounts engaged in astroturfing operations on multiple platforms, posting “vox populi” comments about hunting or politics that appeared grassroots but were in fact paid commentary, much of it from people who did not exist.
The fake accounts were operated over a period of several years, with a period of dormancy that appeared to coincide with the end of the 2018 election cycle. These fake accounts occasionally pivoted in their expressed political beliefs and topical focus.
Most of the Rally Forge-linked Page audiences were small, and comments that its personas left did not appear to generate much response. However, several of its Pages did achieve significant reach at their peak.
Examples of content and replies from the hunting-advocacy astroturfing operation carried out by the network.
While there are bright lines when it comes to foreign influence operations, policies are fuzzier when considering U.S.-based actors, particularly as networked activism tactics are used by an increasing variety of domestic political and issue-based advocacy groups. In this case, the vast majority of the content that Facebook attributed to the Rally Forge network consisted of fairly standard political and issue-based advocacy work. However, there was additionally extensive inauthenticity in the form of fake accounts, which attempted to manipulate the public by way of astroturfed comment activity.
The Rally Forge network across Facebook, Instagram, and Twitter. Twitter is the upper left, Instagram the lower right, and Facebook the smaller cluster between them. Large nodes are individual actors in the network, and the small nodes surrounding them are “interests”—Pages and accounts that they follow. Accounts are increasingly likely to be “real” as they stray from the center of the clusters and have additional diverse interests. Subcommunities within the three platforms, shown in different colors, are inferred by modularity. For example, the two darkest clusters on the lower right are Inclusive Conservation Group leadership.
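The caption's mention of modularity refers to a standard community-detection technique. The sketch below shows a minimal version using networkx's greedy modularity maximization on a toy actor-to-interest follow graph; the node names are hypothetical, not the actual Rally Forge data.

```python
# Minimal sketch: modularity-based community detection on a small
# actor-to-"interest" follow graph. Node names are hypothetical.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("actor1", "hunting_page"), ("actor2", "hunting_page"),
    ("actor1", "actor2"),
    ("actor3", "politics_page"), ("actor4", "politics_page"),
    ("actor3", "actor4"),
])

# Greedy modularity maximization partitions nodes into subcommunities;
# in a figure like the one above, each community would get its own color.
communities = greedy_modularity_communities(G)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```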
An astroturfing operation involving fake accounts (some with AI-generated images) that left thousands of comments on Facebook, Twitter, and Instagram. Clients included Turning Point Action and Inclusive Conservation Group, a pro-hunting organization.
THE 2020 ELECTION IN THE UNITED STATES will take place on November 3 in the midst of a global pandemic, economic downturn, social unrest, political polarization, and a sudden shift in the balance of power in the U.S. Supreme Court. On top of these issues, the technological layer impacting the public debate, as well as the electoral process itself, may well determine the election outcome. The eight-week Stanford University course, “Technology and the 2020 Election: How Silicon Valley Technologies Affect Elections and Shape Democracy,” examines the influence of technology on America’s democratic process, revealing how digital technologies are shaping the public debate and the election.
The US 2020 elections have been fraught with challenges, including the rise of "fake news” and threats of foreign intervention emerging after 2016, ongoing concerns about racially targeted disinformation, and new threats related to the COVID-19 pandemic. Digital technologies have played a more important role in the 2020 elections than ever before.
On November 4th at 10am PST, join the team at the Stanford Cyber Policy Center, in collaboration with the Freeman Spogli Institute, Stanford’s Institute for Human-Centered Artificial Intelligence, and the Stanford Center on Philanthropy and Civil Society, for a day-after discussion of the role of digital technologies in the 2020 Elections. Speakers will include Nathaniel Persily, faculty co-director of the Cyber Policy Center and Director of the Program on Democracy and the Internet; Marietje Schaake, the Center’s International Policy Director and International Policy Fellow at Stanford’s Institute for Human-Centered Artificial Intelligence; Alex Stamos, Director of the Cyber Center’s Internet Observatory and former Chief Security Officer at Facebook and Yahoo; Renee DiResta, Research Manager at the Internet Observatory; Andrew Grotto, Director of the Center’s Program on Geopolitics, Technology, and Governance; and Rob Reich, Faculty Director of the Center for Ethics in Society, in conversation with Kelly Born, the Center’s Executive Director.
Please note that we will also have a YouTube livestream available for potential overflow or for anyone having issues connecting via Zoom: https://youtu.be/H2k62-JCAgE
Renée DiResta is the former Research Manager at the Stanford Internet Observatory. She investigates the spread of malign narratives across social networks, and assists policymakers in understanding and responding to the problem. She has advised Congress, the State Department, and other academic, civic, and business organizations, and has studied disinformation and computational propaganda in the context of pseudoscience conspiracies, terrorism, and state-sponsored information warfare.
You can see a full list of Renée's writing and speeches on her website: www.reneediresta.com or follow her @noupside.
Former Research Manager, Stanford Internet Observatory
Marietje Schaake is a non-resident Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI. She is a columnist for the Financial Times and serves on a number of not-for-profit boards as well as the UN's High-Level Advisory Body on AI. From 2009 to 2019 she served as a Member of the European Parliament, where she worked on trade, foreign, and tech policy. She is the author of The Tech Coup.
Non-Resident Fellow, Cyber Policy Center
Fellow, Institute for Human-Centered Artificial Intelligence
Please join the Cyber Policy Center on Wednesday, October 21, from 10 a.m. to 11 a.m. Pacific Time, for a discussion and exploration of the digital trade war. Host Marietje Schaake, International Policy Director of the Cyber Policy Center, will be in conversation with Dmitry Grozoubinski, founder of ExplainTrade.com and visiting professor at the University of Strathclyde, and Anu Bradford, Henry L. Moses Professor of Law and International Organizations at Columbia Law School and author of The Brussels Effect: How the European Union Rules the World.
This event is free and open to the public, but registration is required.
Recent public outcries over facial recognition technology, police and state usage of automated surveillance tools, and racially motivated disinformation on social media have underscored the ways in which new digital technologies threaten to exacerbate existing racial and social cleavages. What is known about how digital technologies are contributing to racial tensions, what key questions remain unanswered, and what policy changes, by government or tech platforms, might help?
On Wednesday, September 23rd, from 10 a.m. - 11 a.m. Pacific Time, please join us for Race and Technology, with Kelly Born, Executive Director of the Stanford Cyber Policy Center, in conversation with Julie Owono, the Executive Director of Internet Sans Frontières, a digital rights advocacy organization based in France, an affiliate of the Berkman Klein Center for Internet and Society at Harvard and at Stanford’s Digital Civil Society Lab, and a member of Facebook’s Oversight Board; Mutale Nkonde, CEO of AI for the People, a member of the recently formed TikTok Content Advisory Council, and a fellow at Stanford’s Digital Civil Society Lab; and Safiya Noble, Associate Professor at UCLA in the Departments of Information Studies and African American Studies, and author of Algorithms of Oppression.
The event is open to the public, but registration is required.