FSI scholars produce research aimed at creating a safer world and examining the consequences of security policies on institutions and society. They look at longstanding issues including nuclear nonproliferation and conflicts between countries like North and South Korea. But their research also examines new and emerging areas that transcend traditional borders – the drug war in Mexico and expanding terrorism networks. FSI researchers look at the changing methods of warfare with a focus on biosecurity and nuclear risk. They tackle cybersecurity with an eye toward privacy concerns and explore the implications of new actors like hackers.
Along with the changing face of conflict, terrorism and crime, FSI researchers study food security. They tackle the global problems of hunger, poverty and environmental degradation by generating knowledge and policy-relevant solutions.
Please join the Cyber Policy Center for Towards Cyber Peace: Closing the Accountability Gap, hosted by the Cyber Policy Center's Marietje Schaake, along with guests Stéphane Duguin, CEO of the CyberPeace Institute, and Camille François, CIO of Graphika and Mozilla Fellow. The discussion will focus on the challenges to cyber peace and the work being done to chart a path forward. The session is open to the public, but registration is required.
Marietje Schaake is the international policy director at Stanford University’s Cyber Policy Center and international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. She was named President of the CyberPeace Institute. Between 2009 and 2019, Marietje served as a Member of the European Parliament for the Dutch liberal democratic party, where she focused on trade, foreign affairs and technology policies. Marietje is affiliated with a number of non-profits, including the European Council on Foreign Relations and the Observer Research Foundation in India, and writes a monthly column for the Financial Times and a bi-monthly column for the Dutch NRC newspaper.
Camille François works on cyber conflict and digital rights online. She is the Chief Innovation Officer at Graphika, where she leads the company’s work to detect and mitigate disinformation, media manipulation and harassment. Camille was previously the Principal Researcher at Jigsaw, an innovation unit at Google that builds technology to address global security challenges and protect vulnerable users. Camille has advised governments and parliamentary committees on both sides of the Atlantic on policy issues related to cybersecurity and digital rights. She served as a special advisor to the Chief Technology Officer of France in the Prime Minister’s office, working on France’s first Open Government roadmap. Camille is a Mozilla Fellow, a Berkman Klein Center affiliate, and a Fulbright scholar. She holds a master’s degree in human rights from the French Institute of Political Sciences (Sciences Po) and a master’s degree in international security from the School of International and Public Affairs (SIPA) at Columbia University. François’ work has been featured in various publications, including the New York Times, WIRED, the Washington Post, Bloomberg Businessweek, Globo and Le Monde.
Stéphane Duguin is the Chief Executive Officer of the CyberPeace Institute. His mission is to coordinate a collective response to decrease the frequency, impact, and scale of cyberattacks by sophisticated actors. Building on his hands-on experience in countering and analyzing cyber operations and information operations that impact civilians and civilian infrastructure, he leads the Institute with the aim of holding malicious actors to account for the harms they cause. Prior to this position, Stéphane Duguin was a senior manager and innovation coordinator at Europol. He led key operational projects to counter both cybercrime and online terrorism, such as the setup of the European Cybercrime Centre (EC3), the Europol Innovation Lab, and the European Internet Referral Unit (EU IRU). A leader in digital transformation, his work focused on implementing innovative responses to large-scale abuse of cyberspace, notably at the convergence of disruptive technologies and public-private partnerships.
From the Stanford Institute for Human-Centered AI (HAI) blog:
More than 25 governments around the world, including those of the United States and across the European Union, have adopted elaborate national strategies on artificial intelligence — how to spur research; how to target strategic sectors; how to make AI systems reliable and accountable.
Yet a new analysis finds that almost none of these declarations provide more than a polite nod to human rights, even though artificial intelligence has potentially big impacts on privacy, civil liberties, racial discrimination, and equal protection under the law.
That’s a mistake, says Eileen Donahoe, executive director of Stanford’s Global Digital Policy Incubator, which produced the report in conjunction with a leading international digital rights organization called Global Partners Digital.
Join the Cyber Policy Center on June 17 at 10am Pacific Time for Patterns and Potential Solutions to Disinformation Sharing Under COVID-19 and Beyond, with Josh Tucker, David Lazer and Evelyn Douek.
The session will explore which types of readers are most susceptible to fake news, whether crowdsourced fact-checking by ordinary citizens works and whether it can reduce the prevalence of false news in the information ecosystem. Speakers will also look at patterns of (mis)information sharing regarding COVID-19: Who is sharing what type of information? How has this varied over time? How much misinformation is circulating, and among whom? Finally, we'll explore how social media platforms are responding to COVID disinformation, how that differs from responses to political disinformation, and what we think they could be doing better.
Evelyn Douek is a doctoral candidate and lecturer on law at Harvard Law School, and an Affiliate at the Berkman Klein Center for Internet & Society. Her research focuses on online speech governance and the various private, national and global proposals for regulating content moderation.
David Lazer is a professor of political science and computer and information science at Northeastern University and the co-director of the NULab for Texts, Maps, and Networks. Before joining the Northeastern faculty in fall 2009, he was an associate professor of public policy at Harvard’s John F. Kennedy School of Government and director of its Program on Networked Governance.
Joshua Tucker is Professor of Politics, Director of the Jordan Center for the Advanced Study of Russia, Co-Director of the NYU Social Media and Political Participation (SMaPP) lab, Affiliated Professor of Russian and Slavic Studies, and Affiliated Professor of Data Science.
The event is open to the public, but registration is required.
On Thursday, President Trump signed an executive order threatening to revoke CDA 230 protections, which would expose social media companies to increased liability for content posted on their sites. This comes on the heels of Twitter's decision last week to fact-check two misleading tweets from the president about mail-in voting. Critics of the executive order say the White House is overstepping its authority and cannot limit the legal protections that social media companies currently hold under federal law.
Join the Stanford Cyber Policy Center's team Monday, June 1 at 8AM PST for President Trump’s Executive Order on Platforms and Online Speech: Stanford’s Cyber Policy Center Responds, with Nate Persily, Faculty Co-Director of the Cyber Policy Center and Director of the Program on Democracy and the Internet; Daphne Keller, Director of the Program on Platform Regulation and former associate general counsel for Google; Alex Stamos, Director of the Cyber Center’s Internet Observatory and former Chief Security Officer at Facebook; Marietje Schaake, Policy Director for the Cyber Policy Center and former Member of the European Parliament; and Eileen Donahoe, Executive Director of the Global Digital Policy Incubator and former US Ambassador to the UN Human Rights Council, in conversation with Cyber Center Director Kelly Born.
Pseudoscience and government conspiracy theories swirl on social media, though most of them stay largely confined to niche communities. In the case of COVID-19, however, a combination of anger at what some see as overly restrictive government policies, conflicting information about treatments and disease spread, and anxiety about the future has many people searching for facts...and finding misinformation. This dynamic creates an opportunity for determined people and skilled marketers to fill the void - to create content and produce messages designed to be shared widely. One recent example was “Plandemic”, a slickly-produced video in which a purported whistleblower mixed health misinformation about COVID-19 into a broader conspiratorial tale of profiteering and cover-ups. Much like the disease it purports to explain, the video traveled rapidly across international boundaries in a matter of days.
Plandemic was one step in a larger process to raise the profile of its subject. SIO observed an increasing number of posts about Plandemic’s subject, Judy Mikovits, beginning on April 16. For two and a half weeks, we observed a series of cross-platform moments in which Judy Mikovits – a scientist whose work was retracted by journal editors – was recast as an expert whistleblower exposing a vast government cover-up. While it was the “Plandemic” video that propelled her to global notoriety, weeks of planned activity led to its rapid virality.
We analyzed 41,662 posts on Facebook, Instagram, YouTube, and Twitter starting April 15, when anti-vaccine and natural health Facebook pages began to promote Mikovits and her new book. While she had been an anti-vaccine conference speaker for years, social media dynamics suggested that Mikovits’s narratives were now being marketed for far larger mainstream audiences. While most of the early content related to Mikovits stayed within these echo chambers, a well-oiled PR machine propelled the discussion of her claims into larger communities like MAGA and QAnon, which eventually eclipsed the anti-vaccine and natural health communities in sheer volume of posts.
The "Plandemic" trailer was released on May 4th. We gathered data until May 17; by that point, news media and fact-checking organizations had addressed the misinformation for U.S. audiences, but it had begun to spread internationally. In this post, we discuss not only the community dynamics of the spread, but some dynamics of the debunking, offering a look at the lifecycle of a coordinated campaign to create a counter-authority figure and drive a political and economic agenda.
Key Takeaways:
Content related to Mikovits appeared with increasing frequency within the natural-health and anti-vaccine communities beginning April 15, 2020. Much of it took the form of YouTube videos shared to Facebook. These early videos got some traction in QAnon as well as broadly conspiratorial communities, but largely stayed within these echo chambers until the "Plandemic" video was posted by its creator on May 4.
After the "Plandemic" video appeared, content related to Mikovits began to spread widely outside of the original collection of communities: over 5000 posts mentioning her appeared across 1681 pro-Trump/conservative Groups and Pages, more than 800 posts showed up across 125 “Reopen”-oriented groups, and 875 posts appeared across 494 generic local chat community Facebook Groups. There were additionally over 3700 niche interest groups with posts mentioning Mikovits that did not fit any of our community classifications.
Of the 41,662 posts mentioning Mikovits on Facebook and Instagram, the majority had very low engagement; 37,120 (89%) had fewer than 50 engagements, and 11,548 posts (28%) had zero interactions.
Some of the highest-engagement posts supportive of Mikovits came from verified “blue-check” influencers, including Robert F. Kennedy Jr, Rashid Buttar, and some Italian and Spanish-language influencers.
11 of the top 30 high-engagement posts overall appeared on Instagram, despite only 503 Instagram posts in the data set (1.2%).
Our research also looked at the spread of debunking content. The "Plandemic" video went viral between May 5-7; most debunking pieces appeared between May 6-12. Some were created by news organizations (e.g., BuzzFeed and the New York Times), some by social-media-savvy doctors (ZDoggMD, Doctor Mike), and some were direct posts on science communicator pages (SciBabe). We found that 10 debunking posts on Facebook and Instagram made it into the top 25 highest-engagement posts mentioning Mikovits overall. However, closer examination of the engagement on some of these posts suggests that many top comments were from people angrily challenging the fact-check.
As the debunking content began to appear and spread within the United States, mentions of Mikovits and shares of the "Plandemic" video were gaining momentum in international communities. We noted rising activity in Italian, Portuguese, Romanian, Vietnamese, Norwegian, Dutch, French, and German. This slight time offset may indicate an opportunity to minimize the global reach of misinformation in the future through rapidly translated debunking articles.
Manufacturing an Influencer
The "Plandemic" video was not the first time Judy Mikovits was cast as a whistleblowing hero. Three weeks before it appeared, there was an increase in social media posts mentioning Mikovits – and her new book – within anti-vaccine communities and Twitter hashtags related to Anthony Fauci. Her social media commentary, and book promotion, focused on two themes: first, pseudoscience about vaccines; second, allegations of government cover-ups and intrigue. Mikovits had claimed for years that a wide range of diseases were caused by contaminated vaccines; that trope was adjusted to fit public interest in the coronavirus pandemic via claims that COVID-19 was an engineered virus, and that it was tied to the flu shot. To appeal to the rising Reopen movement, which sees Fauci as an impediment, Mikovits claimed that he and other powerful forces had silenced her, targeting her as part of a vast government cover-up to conceal her research findings. None of this is true.
Mikovits (or her team) had previously run a Twitter account for her speaking and writing career on the anti-vaccine conference circuit; it amassed roughly 1700 followers in three years. Following the release of the new book, a new Twitter account appeared on April 18th: @drjudyamikovits. Within 24 hours it had tweeted only once yet amassed nearly 18,000 followers. Rapid follower growth continued for days, sometimes by thousands overnight despite no new tweets or prominent mentions. An analysis of the new account’s followers turned up thousands of accounts created within the prior two weeks, as well as clusters of older follower accounts created in close time proximity that had no tweets or profile pictures.
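The post does not detail how this follower analysis was performed, but a minimal sketch of the general approach might look like the following. The record fields and thresholds are illustrative assumptions, not SIO's actual pipeline; in practice the follower records would be pulled from the Twitter API.

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical follower records for the new account; field names
# are illustrative, not the actual API schema used in the study.
followers = [
    {"created": date(2020, 4, 10), "tweets": 0, "has_photo": False},
    # ... thousands more records ...
]

analysis_date = date(2020, 4, 20)

# Signal 1: accounts created within the prior two weeks,
# suggesting a surge of brand-new followers.
new_accounts = [f for f in followers
                if (analysis_date - f["created"]) <= timedelta(days=14)]

# Signal 2: clusters of accounts created on (nearly) the same day
# that have no tweets and no profile picture.
creation_days = Counter(f["created"] for f in followers
                        if f["tweets"] == 0 and not f["has_photo"])
suspicious_clusters = {day: n for day, n in creation_days.items() if n >= 50}

print(f"{len(new_accounts)} followers created in the prior two weeks")
for day, n in sorted(suspicious_clusters.items()):
    print(f"  {n} empty accounts created on {day}")
```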
The first tweet from @drjudyamikovits, since deleted, announced that Zach Vorhies was helping with her social media presence. Vorhies, a self-styled whistleblower involved with Project Veritas (a group known for creating misleading ideological attack videos), had posted his own tweets corroborating that he was running Mikovits’s PR. His motivation appeared tied to Fauci: “Help Me Take Down Fauci”, he wrote in a tweet appealing for donations to get Mikovits’s message out, and he outlined the marketing plan on his GoFundMe page.
The remarkable growth on Twitter, and the increasing presence of Mikovits on YouTube channels that had featured Vorhies, line up with this strategy. Journalists have since covered additional facets of Vorhies’s involvement, and University of Washington professor Kate Starbird wrote about the Twitter dynamics of Mikovits and "Plandemic."
YouTube Roadshow
Mikovits was known within anti-vaccine and alternative health communities, but largely unknown to the broader public as her book promotion got underway. YouTube creators with large numbers of subscribers provided a way to put her message in front of large, new audiences. These included alternative medicine influencers like Rashid Buttar, right-wing media such as Next News Network, and generic interview channels such as Valuetainment. Facebook data shows that at least 77% of pre-"Plandemic" mentions of ‘Mikovits’ (9269 posts) included video content, and 43% of the posts had a YouTube link; Buzzsumo YouTube data showed that the top ten most-watched videos in the early days of the media campaign amassed 2.9 million views. Many of these YouTube videos were taken down because they contained health misinformation, although this was done after they achieved high view counts and social reach (for example, Next News Network’s video had at least 765,000 views, and 242,600 engagements across Facebook and Twitter before coming down). Several of the takedowns led to secondary videos – and secondary waves of attention – in which channel creators complained about YouTube censorship.
Although the early interviews got some reach, high-engagement posts about Mikovits largely stayed within anti-vaccine and alternative-health echo chambers, with some occasional mentions within MAGA and QAnon communities (largely tied to Gateway Pundit coverage).
Then, "Plandemic" appeared…
We inductively created dictionaries of words associated with particular communities, and used these dictionaries to classify the 16,464 Pages, Groups, and Instagram accounts that created the 41,662 posts mentioning Mikovits into 27 distinct communities. The goal was to better understand sharing dynamics at an aggregate community level, rather than the dynamics of individual pages. Data were obtained via CrowdTangle.
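The dictionaries themselves are not published in this post, but a minimal sketch of how this kind of keyword-dictionary classification might work is below. The community names echo those in the post; the keywords, scoring rule, and function are illustrative assumptions, not the study's actual method.

```python
# Illustrative keyword dictionaries -- the study's real dictionaries
# were built inductively and covered 27 communities.
COMMUNITY_DICTIONARIES = {
    "anti-vaccine":   {"vaccine injury", "vaxx", "informed consent"},
    "natural health": {"holistic", "natural remedies", "wellness"},
    "MAGA":           {"maga", "trump 2020", "kag"},
    "QAnon":          {"qanon", "wwg1wga", "great awakening"},
}

def classify_account(name: str, description: str) -> str:
    """Assign a Page/Group/account to the community whose dictionary
    matches the most terms in its name and description."""
    text = f"{name} {description}".lower()
    scores = {
        community: sum(term in text for term in terms)
        for community, terms in COMMUNITY_DICTIONARIES.items()
    }
    best = max(scores, key=scores.get)
    # Accounts matching no dictionary fall into a catch-all bucket,
    # analogous to the post's "Miscellaneous" community.
    return best if scores[best] > 0 else "Miscellaneous"

print(classify_account("Patriots for the Great Awakening", "WWG1WGA news"))
# -> "QAnon"
```

In practice a real pipeline would score account metadata against much larger dictionaries, but the structure is the same: count dictionary hits, assign the best-scoring community, and fall back to a catch-all bucket.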
Plandemic: misinformation goes viral
"Plandemic" was posted to Facebook on the afternoon of May 4th by its creator, Mikki Willis. His post was unique in that it did not highlight the content of the video, but instead emphasized that the video would imminently be censored. Stealing a line from the civil rights movement (and poet June Jordan) – ”We are the ones we've been waiting for” – he implored his readers to make copies of the video and share it.
"Plandemic" was a pivotal step in framing Mikovits as a whistleblower icon; 40% of the 41,662 posts in the data set included a link to plandemicmovie.com or a mention of Plandemic. The video was rife with misinformation, but it propelled awareness of Mikovits to a broad range of new audiences. Unlike prior videos, it offered polished editing and mainstream-friendly presentation of both Mikovits’s health conspiracies and her vast-government-coverup story. Communities mentioning her expanded to include hundreds of Groups and Pages representing local communities (“New Brunswick Community Bulletin Board”), religious communities (“Love Of Jesus Fellowship”), and a small handful of liberal and left groups (“The Struggle for Equality”). The network graph below shows the significant increase in mentions and the emergence of new communities.
The “Miscellaneous” community includes 3,700 Groups and Pages that didn’t fit neatly into any of the other communities - “Fashion Style”, “Martin Guitars,” etc. This difficulty of classification is itself interesting, because it suggests that "Plandemic" captured the attention of a broad range of audiences. The figure below offers another view of mentions and interactions of Mikovits in the few days immediately surrounding "Plandemic." Interestingly, despite the significant spike in some community shares, such as MAGA, there was no significant rise in engagement. Many of the posts had fewer than 20 interactions.
Post-Plandemic: International Growth and Debunking Narratives
On May 7th, 2020, as the virality of "Plandemic" began to diminish among U.S. audiences, we observed two new dynamics. First, a ‘second wave’ of posting activity about Mikovits arose in a variety of non-English languages, including Italian, Polish, Russian, and others (below). While we’d noted the early presence of activity in Spanish-language conspiracy communities (a finding observed by other researchers), the number of non-English posts increased. These posts included links not only to "Plandemic" itself, but to other non-English videos discussing the claims.
This graph shows the emergence of posts in numerous languages mentioning Mikovits after the May 4th release of "Plandemic." The video went viral on May 5-6th, while non-English posts increased in volume several days later. (The early Spanish-language posts were part of a cluster of Spanish-language conspiratorial pages.)
The second dynamic that picked up around May 7th, 2020 was the appearance of mainstream media, medical and scientific influencers, and fact-checking organizations. We tracked 65 links featuring debunking content created between May 6th and May 12th. These links appeared in 1,132 posts in our 41,662-post data set; most (714) were links to articles; 366 were shares of video or YouTube content.
Debunking is inherently reactive, but the two-day gap between viral misinformation and correction illustrates how the lie goes halfway around the world, as the saying goes, before the truth gets its pants on. The earliest debunking content appeared in English even as the misinformation continued to spread around the world, translated into other languages. The bar chart below shows the number of posts of the 65 debunking URLs appearing within various communities (almost all of those posts were in English).
Understanding the complex dynamics around sharing corrections is important to understanding how to address viral misinformation. In this preliminary work, we noted that 6 of these posts ranked among the 25 highest-engagement posts that mentioned ‘Mikovits’ overall. However, engagement stats alone don’t always tell the whole story. For example, some of the engagement comes from the debunker’s own fan base; YouTube influencers who post funny or creative videos get a lot of encouragement from their subscribers, which, while heartening to see, doesn’t mean that their debunking reached fence-sitters or changed the minds of those who believed the misinformation.
We looked at some of the posts of debunking links that appeared in more conspiratorial or politically polarized groups; examples appear below. While some posters appeared receptive to correction, others posted the links to debunk the debunking, mock the media’s attempts to ‘silence the truth’, or suggest fellow Group members go downvote the correction videos.
Video platforms can reduce the viral spread of new videos by downranking them in recommendation and search engines while the videos are evaluated by fact-checkers. This could be done in an automated fashion for videos that mention high-manipulation-risk topics (like COVID) and would give fact-checkers and policy teams more time to make considered decisions (see the sketch after these recommendations).
Video platforms should consider leaving up the original postings of controversial videos, annotated with the appropriate fact-check, and downranking or eliminating re-uploads. This creates one location that can be used to offer debunking content, both as a pre-roll on the video and in the recommended next videos. Eliminating the video completely is almost impossible; attempting to do so ensures that the original disinformation goes viral via re-uploads while debunking content is not distributed alongside it. Takedowns also have the secondary effect of leading to allegations of censorship that draw attention and even elevate the reputation of the creator or subject within some communities.
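As a concrete illustration of the first recommendation, here is a minimal sketch of automated, temporary downranking for unreviewed videos on high-risk topics. The topic list, weight factor, and function are hypothetical, not any platform's actual system.

```python
# Illustrative list of high-manipulation-risk topics.
HIGH_RISK_TOPICS = {"covid", "coronavirus", "vaccine"}

def recommendation_weight(title: str, description: str,
                          fact_checked: bool, base_weight: float) -> float:
    """Temporarily downrank new videos on high-risk topics in
    recommendations and search until fact-checkers have reviewed them."""
    text = f"{title} {description}".lower()
    mentions_risky_topic = any(topic in text for topic in HIGH_RISK_TOPICS)
    if mentions_risky_topic and not fact_checked:
        return base_weight * 0.1  # reduced reach while under review
    return base_weight

# Example: a new, unreviewed video mentioning COVID gets a fraction
# of its normal weight; once reviewed, full weight is restored.
print(recommendation_weight("PLANDEMIC full video", "the truth about covid",
                            fact_checked=False, base_weight=1.0))  # -> 0.1
```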
Conclusion
The campaign to recast Judy Mikovits as a whistleblower offers a case study in the type of factional network dynamics and cross-platform content spread that will likely happen repeatedly over the coming months, around COVID-19 as well as the 2020 election. Although the activity involved some fake Twitter accounts, there was nothing that crossed the line into coordinated inauthentic behavior – this was a marketing campaign that pulled ordinary people into the sharing process. However, it was also a marketing campaign that made blatantly false claims and increased confusion and skepticism around vaccines, health authorities, and institutional responses to the pandemic. Platforms have rightly committed to mitigating health misinformation; this example makes clear the need to develop better solutions that avoid after-the-fact content takedowns that turn manipulative charlatans into free-expression martyrs. Further study of cross-platform, cross-faction sharing dynamics around debunking content in particular would help inform fact-checking efforts, and help platforms gauge how to respond to highly misleading viral videos.
The authors would like to thank Jen Brea, Ben Decker and Kate Starbird for early discussions and feedback on this work and the whole Stanford Internet Observatory team for their valuable editorial input and research support.
About the Event: The United States is steadily losing ground in the race against China to pioneer the most important technologies of the 21st century. With technology a critical determinant of future military advantage, a key driver of economic prosperity, and a potent tool for the promotion of different models of governance, the stakes could not be higher. To compete, China is leveraging its formidable scale—whether measured in terms of research and development expenditures, data sets, scientists and engineers, venture capital, or the reach of its leading technology companies. The only way for the United States to tip the scales back in its favor is to deepen cooperation with allies. The global diffusion of innovation also places a premium on aligning U.S. and ally efforts to protect technology. Unless coordinated with allies, tougher U.S. investment screening and export control policies will feature major seams that Beijing can exploit.
In early June, join Stanford's Center for International Security and Cooperation (CISAC) and the Center for a New American Security (CNAS) for a unique virtual event that will feature three policy experts advancing concrete ideas for how the United States can enhance cooperation with allies around technology innovation and protection.
This webinar will be on-the-record, and include time for audience Q&A.
About the Speakers:
Anja Manuel, Stanford Research Affiliate, CNAS Adjunct Senior Fellow, Partner at Rice, Hadley, Gates & Manuel LLC, and author with Pav Singh of Compete, Contest and Collaborate: How to Win the Technology Race with China.
Daniel Kliman, Senior Fellow and Director, CNAS Asia-Pacific Security Program, and co-author of a recent report, Forging an Alliance Innovation Base.
Martijn Rasser, Senior Fellow, CNAS Technology and National Security Program, and lead researcher on the Technology Alliance Project.
COVID-19 is having a profound impact on our online systems - exposing both the essential role they can and do play in our modern society, and the risks and vulnerabilities they represent. Substantial research is emerging on this topic, and the implications of that research will have important consequences for both medium-term (e.g., the 2020 elections) and long-term cyber policies.
Welcome Remarks: Mike McFaul
Introduction to CPC and the center’s work on COVID from moderator Kelly Born
Alex Stamos of the Internet Observatory will discuss the Observatory's work examining shifting narratives about coronavirus from Chinese and Russian state media, early insights into COVID misinformation in other countries (e.g., Nigeria), and how tech companies are responding, as well as how Zoom and other platforms have been working to adapt policies and practices to meet growing demands and risks.
Nate Persily at PDI will discuss the challenges of running elections in the current environment, including necessary changes to state policies and practices.
Eileen Donahoe at GDPi will discuss geopolitical threats to the international human rights law framework arising from ineffective responses to COVID-19 by democratic governments; risks to five specific substantive civil and political rights; and recommendations that democratic governments apply international human rights process principles in the COVID-19 context.
Marietje Schaake will discuss human-rights challenges (to privacy, freedom of association, and freedom of expression) that have arisen with various applications of AI in the COVID-19 context - e.g., contact tracing and content moderation - as well as emerging criteria for policymakers to consider when deploying tracing and related technologies.
A Research Agenda for Cyber Risk and Cyber Insurance | June 2019
Lead Author: Gregory Falco, Program on Geopolitics, Technology, and Governance at the Cyber Policy Center
Presented at the 2019 Workshop on the Economics of Information Security (Boston, June 3-4, 2019)
Cyber risk as a research topic has attracted considerable academic, industry and government attention over the past 15 years. Unfortunately, research progress has been modest and has not been sufficient to answer the “call to action” in many prestigious committee and agency reports. To date, industry and academic research on cyber risk in all its complexity has been piecemeal and uncoordinated – which is typical of emergent, pre-paradigmatic fields. Further complicating matters is the multidisciplinary character of cyber risk. In order to significantly advance the pace of research progress, a group of scholars, industry practitioners and policymakers from around the world present a research agenda for cyber risk and cyber insurance that accounts for the variety of fields relevant to the problem space. We propose a cyber risk unified concept model that identifies where certain disciplines of study can add value. The concept model can also be used to identify collaboration opportunities across the major research questions. In this agenda, we unpack the major research questions into manageable projects and tactical questions that need to be addressed.
On May 20th please join us for Perspectives on Science Communication, Misinformation, and the COVID-19 Infodemic, featuring University of Washington scholars Kate Starbird, Jevin West and Ryan Calo, in conversation with Cyber Policy Center Director Kelly Born, as they discuss a new project exploring how scientific findings and science credentials are mobilized in the spread of misinformation.
Kate Starbird and Jevin West will present emerging research into how scientific findings and science credentials are mobilized within the spread of false and misleading information about COVID-19. Ryan Calo will explore proposals to address COVID-19 through information technology—the subject of a recent Senate Commerce hearing at which he testified—with particular attention to the ways contact tracing apps could prove a vector for misinformation and disinformation.
Kate Starbird is an Associate Professor in the Department of Human Centered Design & Engineering (HCDE) and Director of the Emerging Capacities of Mass Participation (emCOMP) Laboratory. She is also adjunct faculty in the Paul G. Allen School of Computer Science & Engineering and the Information School and a data science fellow at the eScience Institute.
Kate's research is situated within human-computer interaction (HCI) and the emerging field of crisis informatics — the study of how information-communication technologies (ICTs) are used during crisis events. Her research examines how people use social media to seek, share, and make sense of information after natural disasters (such as earthquakes and hurricanes) and man-made disasters (such as acts of terrorism and mass shooting events). More recently, her work has shifted to focus on the spread of disinformation in this context.
Ryan Calo is the Lane Powell and D. Wayne Gittinger Associate Professor at the University of Washington School of Law. In addition to co-founding the UW Center for an Informed Public, he is a faculty co-director (with Batya Friedman and Tadayoshi Kohno) of the UW Tech Policy Lab, a unique, interdisciplinary research unit that spans the School of Law, the Information School, and the Paul G. Allen School of Computer Science and Engineering, where Calo also holds courtesy appointments. Calo is widely published in the area of law and emerging technology.
Jevin West is an Associate Professor in the Information School at the University of Washington. He is the co-founder of the DataLab and the new Center for an Informed Public at UW. He holds an adjunct faculty position in the Paul G. Allen School of Computer Science & Engineering and is a Data Science Fellow at the eScience Institute. His research and teaching focus on misinformation in and about science. He develops methods for mining the scientific literature in order to study the origins of disciplines, the social and economic biases that drive these disciplines, and the impact the current publication system has on the health of science.
Kelly Born is the Executive Director of Stanford’s Cyber Policy Center. The center’s research and teaching focus on the governance of digital technology at the intersection of security, geopolitics and democracy. Born collaborates with the center’s program leaders to pioneer new lines of research, policy-oriented curriculum, and outreach to key decision-makers globally. Prior to joining Stanford, Born helped to launch and lead The Madison Initiative at the William and Flora Hewlett Foundation, one of the largest philanthropic undertakings working to reduce polarization and improve U.S. democracy. There, she designed and implemented strategies focused on money in politics, electoral reform, civic engagement and digital disinformation. Kelly earned a master’s degree in international policy from Stanford University.
A recording of this event is available on YouTube.
National AI Strategies and Human Rights: New Urgency in the Era of COVID-19 takes place Wednesday, May 6th, at 10am PST with Eileen Donahoe, the Executive Director of the Global Digital Policy Incubator (GDPi) at Stanford's Cyber Policy Center, and Megan Metzger, Associate Director for Research, also at GDPi. Joining them will be Mark Latonero, Senior Researcher at Data & Society; Richard Wingfield, from Global Partners Digital; and Gallit Dobner, Director of the Centre for International Digital Policy at Global Affairs Canada. The session will be moderated by Kelly Born, Executive Director of the Cyber Policy Center.
The seminar will focus on the recently published report, National Artificial Intelligence Strategies and Human Rights: A Review, produced by the Global Digital Policy Incubator at Stanford and Global Partners Digital - and will also provide an opportunity to look at how the COVID-19 crisis is impacting human rights and digital technology work more generally.
We will also be jointly hosting a webinar with the Freeman Spogli Institute for International Studies on May 8th at 1pm PST, with experts from around the center and institute discussing emerging research on COVID-19 and the implications for future cyber policies, as well as the upcoming elections. More information on the May 8th event can be found here.
Eileen Donahoe is the Executive Director of the Global Digital Policy Incubator (GDPi) at Stanford University’s FSI/Cyber Policy Center. GDPi is a global multi-stakeholder collaboration hub for the development of policies that reinforce human rights and democratic values in digitized society. Areas of current research: AI & human rights; combatting digital disinformation; governance of digital platforms. She served in the Obama administration as the first US Ambassador to the UN Human Rights Council in Geneva, at a time of significant institutional reform and innovation. After leaving government, she joined Human Rights Watch as Director of Global Affairs, where she represented the organization worldwide on human rights foreign policy, with special emphasis on digital rights, cybersecurity and internet governance. Earlier in her career, she was a technology litigator at Fenwick & West in Silicon Valley. Eileen serves on the National Endowment for Democracy Board of Directors; the Transatlantic Commission on Election Integrity; the World Economic Forum Future Council on the Digital Economy; the University of Essex Advisory Board on Human Rights, Big Data and Technology; the NDI Designing for Democracy Advisory Board; the Freedom Online Coalition Advisory Network; and the Dartmouth College Board of Trustees. Degrees: BA, Dartmouth; J.D., Stanford Law School; MA, East Asian Studies, Stanford; M.T.S., Harvard; and Ph.D., Ethics & Social Theory, GTU Cooperative Program with UC Berkeley. She is a member of the Council on Foreign Relations.
Megan Metzger is a Research Scholar and Associate Director for Research at the Global Digital Policy Incubator (GDPi). Megan’s research focuses on how changes in technology alter how individuals and states access and use information, and how this affects protest and other forms of political behavior. Her dissertation focused primarily on the role of social media during the EuroMaidan protests in Ukraine. She has also worked on projects about the Gezi Park protests in Turkey, and has ongoing projects exploring Russian state strategies of information online. In addition to her academic background, Megan has spent a number of years studying and working in the post-communist world. Her scholarly work has been published in the Journal of Comparative Economics and Slavic Review. Her analysis has also been published in the Monkey Cage blog at the Washington Post, the Huffington Post and Al Jazeera English.
Mark Latonero is a Senior Researcher at Data & Society focused on AI and human rights and a Fellow at Harvard Kennedy School’s Carr Center for Human Rights Policy. Previously he was a research director and research professor at USC where he led the Technology and Human Trafficking Initiative. He has also served as innovation consultant for the UN Office of the High Commissioner for Human Rights. Dr. Latonero works on the social and policy implications of emerging technology and examines the benefits, risks, and harms of digital technologies, particularly in human rights and humanitarian contexts. He has published a number of reports on the impact of data-centric and automated technologies in forced migration, refugee identity, and crisis response.
Richard Wingfield provides legal and policy expertise across Global Partners Digital's portfolio of programs. As Head of Legal, he provides legal and policy advice internally at GPD and to its partner organizations on human rights as they relate to the internet and digital policy, and develops legal analyses, policy briefings and other resources for stakeholders. Before joining GPD, Richard led policy development and advocacy at the Equal Rights Trust, an international human rights organization working to combat discrimination and inequality. He has also undertaken research for the Bar Human Rights Committee, the Commonwealth Lawyers Association and the Netherlands Institute of Human Rights, and provided support during the preparatory work for the Yogyakarta Principles.

Gallit Dobner is Director of the Centre for International Digital Policy at Global Affairs Canada, with responsibility for the G7 Rapid Response Mechanism to counter foreign threats to democracy as well as broader issues at the intersection of foreign policy and technology. She formerly served as Political Counsellor in The Hague, where she was responsible for bilateral relations and the international courts and tribunals (2015-19), and in Algiers (2010-12). Gallit has also served as Deputy Director at Global Affairs Canada for various international security files, including counterterrorism, the Middle East, and Afghanistan. Prior to this, Gallit was a Middle East analyst at Canada’s Privy Council Office. Gallit has a master’s in political science from McGill University and Sciences Po.