Authors: Stanford Internet Observatory
News Type: Blogs

Risk and harm are set to scale exponentially and may strangle the opportunities that generational technologies create. We have a narrow window of opportunity to leverage decades of hard-won lessons and invest in reinforcing human dignity and societal resilience globally.

That which occurs offline will occur online, and increasingly there is no choice but to engage with online tools even in formerly offline spaces. As the distinction between “real” and “digital” worlds inevitably blurs, we must accept that the digital future—and any trustworthy future web—will reflect all of the complexity, and even impossibility, inherent in understanding and building a trustworthy world offline.

Scaling Trust on the Web, the comprehensive final report of the Task Force for a Trustworthy Future Web, maps systems-level dynamics and gaps that impact the trustworthiness and usefulness of online spaces. It highlights where existing approaches will not adequately meet future needs, particularly given emerging metaversal and generative AI technologies. Most importantly, it identifies immediate interventions that could catalyze safer, more trustworthy online spaces, now and in the future.

We are at a pivotal moment in the evolution of online spaces. A rare combination of regulatory sea change that will transform markets, landmark technological developments, and newly consolidating expertise can open a window onto a new and better future. Risk and harm are currently set to scale and accelerate at an exponential pace, and existing institutions, systems, and market drivers cannot keep up. Industry will continue to drive rapid change but will also prove unable or unwilling to solve the core problems at hand. In response, innovations in governance, research, financial, and inclusion models must scale with similar velocity.

While some harms and risks must be accepted as the price of protecting the fundamental freedoms that underpin a free society, choices made when creating or maintaining online spaces generate risks, harms, and beneficial impacts. These choices are not value-neutral, because their resulting products do not enter neutral societies. Malignancy migrates, and harms are not evenly distributed across societies. Marginalized communities suffer disproportionate levels of harm both online and off. Online spaces that do not account for that reality consequently scale malignancy and marginalization.

Within industry, decades of “trust and safety” (T&S) practice has developed into a field that can illuminate the complexities of building and operating online spaces. Outside industry, civil society groups, independent researchers, and academics continue to lead the way in building collective understanding of how risks propagate via online platforms—and how products could be constructed to better promote social well-being and to mitigate harms.

Publication Type: Case Studies
Authors: Graphika, Stanford Internet Observatory
News Type: Blogs

In July and August 2022, Twitter and Meta removed two overlapping sets of accounts for violating their platforms’ terms of service. Twitter said the accounts fell foul of its policies on “platform manipulation and spam,” while Meta said the assets on its platforms engaged in “coordinated inauthentic behavior.” After taking down the assets, both platforms provided portions of the activity to Graphika and the Stanford Internet Observatory for further analysis.

Our joint investigation found an interconnected web of accounts on Twitter, Facebook, Instagram, and five other social media platforms that used deceptive tactics to promote pro-Western narratives in the Middle East and Central Asia. The platforms’ datasets appear to cover a series of covert campaigns over a period of almost five years rather than one homogeneous operation. 

These campaigns consistently advanced narratives promoting the interests of the United States and its allies while opposing countries including Russia, China, and Iran. The accounts heavily criticized Russia in particular for the deaths of innocent civilians and other atrocities committed by its soldiers in pursuit of the Kremlin’s “imperial ambitions” following the invasion of Ukraine in February 2022. A portion of the activity also promoted anti-extremism messaging.

We believe this activity represents the most extensive case of covert pro-Western influence operations on social media to be reviewed and analyzed by open-source researchers to date. With few exceptions, the study of modern influence operations has overwhelmingly focused on activity linked to authoritarian regimes in countries such as Russia, China, and Iran, with recent growth in research on the integral role played by private entities. This report illustrates the much wider range of actors engaged in active operations to influence online audiences.

At the same time, Twitter and Meta’s data reveals the limited range of tactics influence operation actors employ; the covert campaigns detailed in this report are notable for how similar they are to previous operations we have studied. The assets identified by Twitter and Meta created fake personas with GAN-generated faces, posed as independent media outlets, leveraged memes and short-form videos, attempted to start hashtag campaigns, and launched online petitions: all tactics observed in past operations by other actors. 

Importantly, the data also shows the limitations of using inauthentic tactics to generate engagement and build influence online. The vast majority of posts and tweets we reviewed received no more than a handful of likes or retweets, and only 19% of the covert assets we identified had more than 1,000 followers.

Subtitle: Stanford Internet Observatory collaborated with Graphika to analyze a large network of accounts removed from Facebook, Instagram, and Twitter in our latest report. This information operation likely originated in the United States and targeted a range of countries in the Middle East and Central Asia.
