Environment
Authors
Charles Mok
Kenny Huang
News Type
News
Date
Paragraphs

In new work, Global Digital Policy Incubator (GDPi) Research Scholar Charles Mok, along with Kenny Huang, a leader in Asia’s internet communities, examines Taiwan’s reliance on fragile external systems and how that reliance exposes Taiwan to threats such as geopolitical conflict, cyberattacks, and natural disasters. The key, write Mok and Huang, is strengthening governance, increasing investment, and fostering international cooperation in order to secure a resilient future.

For more, read the full paper, out now and free to download.

Subtitle

A new paper from Charles Mok of GDPi examines the current landscape of Taiwan’s internet infrastructure

Authors
Stanford Internet Observatory
News Type
News
Date
Paragraphs

The agenda for the 2024 Trust & Safety Research Conference is now available. The conference includes two packed days of lightning talks, research presentations, panels, workshops, and a poster session. The conference has an amazing lineup of speakers, including keynote speakers Camille François (Associate Professor of the Practice of International and Public Affairs, Columbia University) and Arvind Narayanan (Professor of Computer Science and Director of the Center for Information Technology Policy, Princeton University).

The Trust & Safety Research Conference convenes a diverse group of academics, researchers, and practitioners from fields including computer science, sociology, law, and political science. It features networking opportunities including happy hours, and complimentary breakfast and lunch are provided on both days.

Register now and save a spot before early bird pricing ends on August 1.

More details on the conference website

Subtitle

Speaker line-up for the third annual Trust & Safety Research Conference announced.

Date Label
Authors
Stanford Internet Observatory
News Type
News
Date
Paragraphs

Registration is now open for the third annual Trust & Safety Research Conference at Stanford University from September 26-27, 2024. Join us for two days of cross-professional presentations and conversations designed to push forward research on trust and safety.

Hosted at Stanford University’s Frances C. Arrillaga Alumni Center, the Trust & Safety Research Conference convenes participants working on trust and safety issues across academia, industry, civil society, and government. The event brings together a cross-disciplinary group of academics and researchers in fields including computer science, sociology, law, and political science to connect with practitioners and policymakers on challenges and new ideas for studying and addressing online trust and safety issues.

Your ticket provides access to:

  • Two days of talks, panels, workshops and breakouts
  • Breakfast and lunch both days of the conference
  • Networking opportunities, including happy hours and poster sessions

Early bird tickets are $150 for attendees from academia, civil society and government, and $600 for attendees from industry. Ticket prices go up August 1, 2024.

CONFERENCE WEBSITE • REGISTER

Subtitle

Tickets on sale for the third annual Trust & Safety Research Conference to be held September 26-27, 2024. Lock in early bird prices by registering before August 1.

Authors
Clifton B. Parker
News Type
News
Date
Paragraphs

While the potential benefits of artificial intelligence are significant and far-reaching, AI’s potential dangers to the global order necessitate an astute governance and policy-making approach, panelists said at the Freeman Spogli Institute for International Studies (FSI) on May 23.

An alumni event hosted by the Ford Dorsey Master’s in International Policy (MIP) program featured a panel discussion on “The Impact of AI on the Global Order.” Participants included Anja Manuel, Jared Dunnmon, David Lobell, and Nathaniel Persily. The moderator was Francis Fukuyama, Olivier Nomellini Senior Fellow at FSI and director of the master’s program.

Manuel, an affiliate at FSI’s Center for International Security and Cooperation and executive director of the Aspen Strategy Group, said that what “artificial intelligence is starting to already do is it creates superpowers in the way it intersects with other technologies.”

An alumna of the MIP program, Manuel noted an experiment in Switzerland a year earlier in which researchers asked an AI tool to come up with new nerve agents; it rapidly generated 40,000 of them. On the subject of strategic nuclear deterrence, AI capabilities may upend existing policy approaches. Though about 30 countries have voluntarily signed up to follow governance standards on how AI would be used in military conflicts, the future is unclear.

“I worry a lot,” said Manuel, noting that AI-controlled fighter jets will likely be more effective than human-piloted craft. “There is a huge incentive to escalate and to let the AI do more and more and more of the fighting, and I think the U.S. government is thinking it through very carefully.”
 


AI amplifies the abilities of all good and bad actors in the system to achieve all the same goals they’ve always had.
Nathaniel Persily
Co-director of the Cyber Policy Center


Geopolitical Competition


Dunnmon, a CISAC affiliate and senior advisor to the director of the Defense Innovation Unit, spoke about the “holistic geopolitical competition” among world powers in the AI realm as these systems offer “unprecedented speed and unprecedented scale.”

“Within that security lens, there’s actually competition across the entirety of the technical AI stack,” he said.

Dunnmon said an underlying security question is whether a given piece of AI software runs on top of libraries sourced from Western companies or on an underlying library stack owned by state enterprises. “That’s a different world.”

He said that “countries are competing for data, and it’s becoming a battlefield of geopolitical competition.”

Societal, Environmental Implications


Lobell, a senior fellow at FSI and the director of the Center for Food Security and the Environment, said his biggest concern is about how AI might change the functioning of societies as well as possible bioterrorism.

“Any environment issue is basically a collective action problem, and you need well-functioning societies with good governance and political institutions, and if that crumbles, I don’t think we have much hope.”

On the positive aspects of AI, he said the combination of AI, synthetic biology, and gene editing is starting to produce much faster production cycles of agricultural products, new breeds of animals, and novel foods. One company has found a way to make a good milk substitute from pineapple, cabbage, and other ingredients.

Lobell said that AI can identify which ships are illegally harvesting seafood and trace the catch back to where it is eventually offloaded. In addition, AI can help create deforestation-free supply chains, and AI mounted on farm tractors can help cut by 90% the use of chemicals that pose environmental risks.

“There’s clear tangible progress being made with these technologies in the realm of the environment, and we can continue to build on that,” he added.
 


Countries are competing for data, and it’s becoming a battlefield of geopolitical competition.
Jared Dunnmon
Affiliate at the Center for International Security and Cooperation (CISAC)


AI and Democracy


Persily, a senior fellow and co-director of FSI’s Cyber Policy Center, said, “AI amplifies the abilities of all good and bad actors in the system to achieve all the same goals they’ve always had.”

He noted, “AI is not social media,” even though it can interact with social media. Persily said AI is so much more pervasive and significant than a given platform such as Facebook. Problems arise in the areas of privacy, antitrust, bias and disinformation, but AI issues are “characteristically different” than social media.

“One of the ways that AI is different than social media is the fact that they are open-source tools. We need to think about this in a little bit of a different way, which is that it is not just a few companies that can be regulated on closed systems,” Persily said.

As a result, AI tools are available to all of us, he said. “There is the possibility that some of the benefits of AI could be realized more globally,” but there are also risks. For example, in the year and a half since OpenAI released ChatGPT, child pornography has multiplied on the Internet.

“The democratization of AI will lead to fundamental challenges to established legacy infrastructure for the governance of the propagation of content,” Persily said.

Balance of AI Power


Fukuyama pointed out that an AI lab at Stanford could not afford leading-edge technology, yet countries such as the U.S. and China have deeper resources to fund AI endeavors.

“This is something obviously that people are worried about,” he said, “whether these two countries are going to dominate the AI race and the AI world and disadvantage everybody.”

Manuel said that most of AI is now operating with voluntary governance – “patchwork” – and that dangerous things involving AI can be done now. “In the end, we’re going to have to adopt a negotiation and an arms control approach to the national security side of this.” 

Lobell said that while it might seem universities cannot keep up with industry, researchers have shown they can reproduce those models’ performance just days after release.
 


In the end, we’re going to have to adopt a negotiation and an arms control approach to the national security side of this.
Anja Manuel
Affiliate at the Center for International Security and Cooperation (CISAC)


On regulation — the European Union is currently weighing legislation — Persily said it would be difficult to enforce regulations and interpret risk assessments, so what is needed is a “transparency regime” and an infrastructure so civil entities have a clear view on what models are being released – yet this will be complex.

“I don’t think we even really understand what a sophisticated, full-on AI audit of these systems would look like,” he said.

Dunnmon suggested that an AI governance entity could be created, similar to how the U.S. Food and Drug Administration reviews pharmaceuticals before release.

In terms of AI and military conflicts, he spoke about the need for AI and humans to understand the rewards and risks involved, and in the case of the latter, how the risk compares to the “next best option.”

“How do you communicate that risk, how do you assess that risk, and how do you make sure the right person with the right equities and the right understanding of those risks is making that risk trade-off decision?” he asked.



The Ford Dorsey Master’s in International Policy program was established in 1982 to provide students with the knowledge and skills necessary to analyze and address complex global challenges in a rapidly changing world, and to prepare the next generation of leaders for public and private sector careers in international policymaking and implementation.

Subtitle

At a gathering for alumni, the Ford Dorsey Master's in International Policy program hosted four experts to discuss the ramifications of AI for global security, the environment, and political systems.

Terms
Date Label
-
Karen Nershi headshot on a blue background with Fall Seminar Series in white font

Join the Cyber Policy Center and moderator Daniel Bateyko in conversation with Karen Nershi for How Strong Are International Standards in Practice? Evidence from Cryptocurrency Transactions.

The rise of cryptocurrency (decentralized digital currency) presents challenges for state regulators given its connection to illegal activity and its pseudonymous nature, which has allowed both individuals and businesses to circumvent national laws through regulatory arbitrage. Karen Nershi assesses the degree to which states have managed to regulate cryptocurrency exchanges, providing a detailed study of international efforts to impose common regulatory standards for a new technology. To do so, she introduces a dataset of cryptocurrency transactions collected during a two-month period in 2020 from exchanges in countries around the world and employs bunching estimation to compare levels of unusual activity below a threshold at which exchanges must screen customers for money laundering risk. She finds that exchanges in some, but not all, countries show substantial unusual activity below the threshold; these findings suggest that while countries have made progress toward regulating cryptocurrency exchanges, gaps in enforcement across countries allow for regulatory arbitrage.
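The intuition behind bunching estimation can be sketched in a few lines. This is an illustrative toy, not Nershi's actual method or data: the threshold, window sizes, and transaction amounts below are invented, and the counterfactual is a crude reference-window average rather than the polynomial fit used in the bunching literature.

```python
# Toy sketch of bunching estimation (hypothetical data and parameters):
# if customers split transactions to stay under a screening threshold,
# we expect "excess mass" - more transactions than the counterfactual
# density predicts - in the bins just below that threshold.

def excess_mass_below_threshold(amounts, threshold, window, bin_width):
    """Ratio of observed to expected counts just below `threshold`.

    The expected count comes from a same-sized reference window
    further below the threshold, a stand-in for a fitted
    counterfactual density.
    """
    below = [a for a in amounts if threshold - window <= a < threshold]
    ref = [a for a in amounts
           if threshold - 3 * window <= a < threshold - 2 * window]
    n_bins = max(1, int(window / bin_width))
    observed = len(below) / n_bins
    expected = max(len(ref) / n_bins, 1e-9)  # avoid division by zero
    return observed / expected

# Hypothetical amounts: a uniform background plus a spike at $990,
# just under an assumed $1,000 customer-screening threshold.
amounts = list(range(0, 2000, 2)) + [990] * 200
ratio = excess_mass_below_threshold(amounts, threshold=1000,
                                    window=50, bin_width=10)
# ratio well above 1 flags unusual activity below the threshold
```

A ratio near 1 indicates no bunching; values well above 1 are the kind of "substantial unusual activity" the abstract describes.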

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford-affiliation required) and virtual attendance (open to the public) is available; registration is required.

Karen Nershi is a Postdoctoral Fellow at Stanford University's Stanford Internet Observatory and the Center for International Security and Cooperation (CISAC). In the summer of 2021, she completed her Ph.D. in political science at the University of Pennsylvania specializing in the fields of international relations and comparative politics. Through an empirical lens, her research examines questions of international cooperation and regulation within international political economy, including challenges emerging from the adoption of decentralized digital currency and other new technologies. 

Specific topics Dr. Nershi explores in her research include ransomware, cross-national regulation of the cryptocurrency sector, and international cooperation around anti-money laundering enforcement. Her research has been supported by the University of Pennsylvania GAPSA Provost Fellowship for Innovation and the Christopher H. Browne Center for International Politics. 

Before beginning her doctorate, Karen Nershi earned a B.A. in International Studies with honors at the University of Alabama. She lived and studied Arabic in Amman, Jordan and Meknes, Morocco as a Foreign Language and Area Studies Fellow and a Critical Language Scholarship recipient. She also lived and studied in Mannheim, Germany, in addition to interning at the U.S. Consulate General Frankfurt (Frankfurt, Germany).

Dan Bateyko is the Special Projects Manager at the Stanford Internet Observatory.

Dan worked previously as a Research Coordinator for The Center on Privacy & Technology at Georgetown Law, where he investigated Immigration and Customs Enforcement surveillance practices, co-authoring American Dragnet: Data-Driven Deportation in the 21st Century. He has worked at the Berkman Klein Center for Internet & Society, the Dangerous Speech Project, and as a research assistant for Amanda Levendowski, whom he assisted with legal scholarship on facial surveillance.

In 2016, he received a Thomas J. Watson Fellowship. He spent his fellowship year talking with people about digital surveillance and Internet infrastructure in South Korea, China, Malaysia, Germany, Ghana, Russia, and Iceland. His writing has appeared in Georgetown Tech Law Review, Columbia Journalism Review, Dazed Magazine, The Internet Health Report, Council on Foreign Relations' Net Politics, and Global Voices. He is a 2022 Internet Law & Policy Foundry Fellow.

Dan received his Master of Law & Technology from Georgetown University Law Center (where he received the IAPP Westin Scholar Book Award for excellence in privacy law), and his B.A. from Middlebury College.

Karen Nershi
Seminars
-
robert robertson headshot fall seminar series text on blue background

Join the Program on Democracy and the Internet (PDI) and moderator Alex Stamos in conversation with Ronald E. Robertson for Engagement Outweighs Exposure to Partisan and Unreliable News within Google Search 

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford-affiliation required) and virtual attendance (open to the public) is available; registration is required.

If popular online platforms systematically expose their users to partisan and unreliable news, they could potentially contribute to societal issues like rising political polarization. This concern is central to the echo chamber and filter bubble debates, which critique the roles that user choice and algorithmic curation play in guiding users to different online information sources. These roles can be measured in terms of exposure, the URLs seen while using an online platform, and engagement, the URLs selected while on that platform or browsing the web more generally. However, due to the challenges of obtaining ecologically valid exposure data (what real users saw during their regular platform use), studies in this vein often only examine engagement data, or estimate exposure via simulated behavior or inference. Despite their centrality to the contemporary information ecosystem, few such studies have focused on web search, and even fewer have examined both exposure and engagement on any platform. To address these gaps, we conducted a two-wave study pairing surveys with ecologically valid measures of exposure and engagement on Google Search during the 2018 and 2020 US elections. We found that participants' partisan identification had a small and inconsistent relationship with the amount of partisan and unreliable news they were exposed to on Google Search, a more consistent relationship with the search results they chose to follow, and the most consistent relationship with their overall engagement. That is, compared to the news sources our participants were exposed to on Google Search, we found more identity-congruent and unreliable news sources in their engagement choices, both within Google Search and overall. These results suggest that exposure and engagement with partisan or unreliable news on Google Search are not primarily driven by algorithmic curation, but by users' own choices.

Dr. Ronald E. Robertson received his Ph.D. in Network Science from Northeastern University in 2021. He was advised by Christo Wilson, a computer scientist, and David Lazer, a political scientist. For his research, Dr. Robertson uses computational tools, behavioral experiments, and qualitative user studies to measure user activity, algorithmic personalization, and choice architecture in online platforms. By rooting his questions in findings and frameworks from the social, behavioral, and network sciences, his goal is to foster a deeper and more widespread understanding of how humans and algorithms interact in digital spaces. Prior to Northeastern, Dr. Robertson obtained a BA in Psychology from the University of California San Diego and worked with research psychologist Robert Epstein at the American Institute for Behavioral Research and Technology.

Alex Stamos

Research Scientist, Cyber Policy Center
Date Label
Seminars
-
l jean camp headshot on blue background

Join the Program on Democracy and the Internet (PDI) and moderator Andrew Grotto, in conversation with L. Jean Camp for Create a Market for Safe, Secure Software

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford-affiliation required) and virtual attendance (open to the public) is available; registration is required.

Today the security market, particularly in embedded software and Internet of Things (IoT) devices, is a lemons market: buyers simply cannot distinguish between secure and insecure products. To enable the market for secure, high-quality products to thrive, buyers need to have some knowledge of the contents of these digital products. Once purchased, ensuring a product or software package remains safe requires knowing whether it includes publicly disclosed vulnerabilities; again, this requires knowledge of the contents. When consumers do not know the contents of their digital products, they cannot know if they are at risk and need to take action.

The Software Bill of Materials (SBOM) is a proposal that was identified as a critical instrument for meeting these challenges and securing software supply chains in the Biden Administration’s Executive Order on Improving the Nation’s Cybersecurity (EO 14028). In this presentation Camp will introduce SBOMs, provide examples, and explain the components that are needed in the marketplace for this initiative to meet its potential.
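The check Camp describes can be sketched in a few lines: once a buyer has a machine-readable list of a product's components, matching them against publicly disclosed vulnerabilities is a lookup. The component names, versions, and advisory IDs below are invented for illustration, and the flattened-dictionary shape is a simplification of real SBOM formats such as SPDX or CycloneDX.

```python
# Hypothetical sketch: auditing a (simplified) SBOM against a table of
# publicly disclosed vulnerabilities. All component names, versions,
# and advisory IDs here are invented examples.

sbom = [  # a minimal, flattened software bill of materials
    {"name": "examplessl", "version": "1.0.2"},
    {"name": "examplezip", "version": "1.3.1"},
]

known_vulns = {  # (component, version) -> disclosed advisories
    ("examplessl", "1.0.2"): ["ADVISORY-EXAMPLE-0001"],
}

def audit(sbom, known_vulns):
    """Return the advisories affecting any component listed in the SBOM."""
    findings = {}
    for comp in sbom:
        key = (comp["name"], comp["version"])
        if key in known_vulns:
            findings[key] = known_vulns[key]
    return findings
```

Without the SBOM, the buyer has no `sbom` list to feed into this check, which is exactly the information asymmetry that makes the market a lemons market.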

Jean Camp is a Professor at Indiana University with appointments in Informatics and Computer Science.  She is a Fellow of the AAAS (2017), the IEEE (2018), and the ACM (2021).  She joined Indiana after eight years at Harvard’s Kennedy School. A year after earning her doctorate from Carnegie Mellon she served as a Senior Member of the Technical Staff at Sandia National Laboratories. She began her career as an engineer at Catawba Nuclear Station after a double major in electrical engineering and mathematics, followed by a MSEE in optoelectronics at University of North Carolina at Charlotte.

L. Jean Camp Professor at Indiana University
Seminars
-
Aleksandra Kuczerawy headshot on a blue background with text European Developments in Internet Regulation

Join the Program on Democracy and the Internet (PDI) and moderator Daphne Keller, in conversation with Aleksandra Kuczerawy for European Developments in Internet Regulation.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford-affiliation required) and virtual attendance (open to the public) is available; registration is required.

The Digital Services Act is a new landmark European Union legislation addressing illegal and harmful content online. Its main goals are to create a safer digital space but also to enhance protection of fundamental rights online. In this talk, Aleksandra Kuczerawy will discuss the core elements of the DSA, such as the layered system of due diligence obligations, content moderation rules and the enforcement framework, while providing underlying policy context for the US audience.

Aleksandra Kuczerawy is a postdoctoral scholar at the Program on Platform Regulation. She was previously a postdoctoral researcher at KU Leuven’s Centre for IT & IP Law and is assistant editor of the International Encyclopedia of Law (IEL) – Cyber Law. She has worked on the topics of privacy and data protection, media law, and the liability of Internet intermediaries since 2010 (projects PrimeLife, Experimedia, REVEAL). In 2017 she participated in the work of the Committee of Experts on Internet Intermediaries (MSI-NET) at the Council of Europe, responsible for drafting a recommendation by the Committee of Ministers on the roles and responsibilities of internet intermediaries and a study on Algorithms and Human Rights.

Daphne Keller
Aleksandra Kuczerawy Postdoctoral Scholar at the Program on Platform Regulation (PPR)
Seminars
-

Join the Program on Democracy and the Internet (PDI) and moderator Nate Persily, in conversation with Aleksandra Kuczerawy for European Developments in Internet Regulation.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation only) and virtual attendance (open to public) is available; registration is required.

Aleksandra Kuczerawy is a postdoctoral scholar at the Program on Platform Regulation. She was previously a postdoctoral researcher at KU Leuven’s Centre for IT & IP Law and is assistant editor of the International Encyclopedia of Law (IEL) – Cyber Law. She has worked on the topics of privacy and data protection, media law, and the liability of Internet intermediaries since 2010 (projects PrimeLife, Experimedia, REVEAL). In 2017 she participated in the work of the Committee of Experts on Internet Intermediaries (MSI-NET) at the Council of Europe, responsible for drafting a recommendation by the Committee of Ministers on the roles and responsibilities of internet intermediaries and a study on Algorithms and Human Rights.

Aleksandra Kuczerawy Postdoctoral Scholar at the Program on Platform Regulation (PPR)
Seminars
-
chenyan jia headshot on flyer

Join the Program on Democracy and the Internet (PDI) and moderator Nate Persily, in conversation with Chenyan Jia for The Evolving Role of AI in Political News Consumption: The Effects of Algorithmic vs. Community Label on Perceived Accuracy of Hyper-partisan Misinformation.

This session is part of the Fall Seminar Series, a months-long series designed to bring researchers, policy makers, scholars and industry professionals together to share research, findings and trends in the cyber policy space. Both in-person (Stanford affiliation only) and virtual attendance (open to the public) is available; registration is required.

Chenyan Jia (Ph.D., The University of Texas at Austin) is a postdoctoral scholar in the Program on Democracy and the Internet (PDI) at Stanford University. In Fall 2023, she will join Northeastern University as an Assistant Professor in the School of Journalism in the College of Arts, Media, and Design with a joint appointment in the Khoury College of Computer Sciences. She has been working as a research assistant for UT's Human–AI Interaction Lab.

Her research interests lie at the intersection of communication and human-computer interaction. Her work has examined (a) the influence of emerging media technologies such as automated journalism and misinformation detection algorithms on people’s political attitudes and news consumption behaviors; (b) the political bias in news coverage through NLP techniques; (c) how to leverage AI technologies to reduce bias and promote democracy.

Her research has appeared in mass communication journals and top-tier AI and HCI venues including Human-Computer Interaction Journal (CSCW), Journal of Artificial Intelligence, International Journal of Communication, Media and Communication, ICLR, ICWSM, EMNLP, ACL, and AAAI. Her research has been awarded the Best Paper Award at AAAI 21. She was the recipient of the Harrington Dissertation Fellowship and the Dallas Morning News Graduate Fellowship for Journalism Innovation.

YOUTUBE RECORDING

Chenyan Jia Postdoctoral Scholar at the Program on Democracy and the Internet (PDI) 
Seminars