Risk and harm are set to scale exponentially and may strangle the opportunities generational technologies create. We have a narrow window of opportunity to leverage decades of hard-won lessons and invest in reinforcing human dignity and societal resilience globally.
That which occurs offline will occur online, and increasingly there is no choice but to engage with online tools even in a formerly offline space. As the distinction between “real” and “digital” worlds inevitably blurs, we must accept that the digital future—and any trustworthy future web—will reflect all of the complexity and impossibility that would be inherent in understanding and building a trustworthy world offline.
Scaling Trust on the Web, the comprehensive final report of the Task Force for a Trustworthy Future Web, maps systems-level dynamics and gaps that impact the trustworthiness and usefulness of online spaces. It highlights where existing approaches will not adequately meet future needs, particularly given emerging metaversal and generative AI technologies. Most importantly, it identifies immediate interventions that could catalyze safer, more trustworthy online spaces, now and in the future.
We are at a pivotal moment in the evolution of online spaces. A rare combination of regulatory sea change that will transform markets, landmarks in technological development, and newly consolidating expertise can open a window into a new and better future. Risk and harm are currently set to scale and accelerate at an exponential pace, and existing institutions, systems, and market drivers cannot keep up. Industry will continue to drive rapid change, but it will also prove unable or unwilling to solve the core problems at hand. In response, innovations in governance, research, financial, and inclusion models must scale with similar velocity.
While some harms and risks must be accepted as the price of protecting the fundamental freedoms that underpin a free society, choices made when creating or maintaining online spaces generate risks, harms, and beneficial impacts. These choices are not value neutral, because the resulting products do not enter into neutral societies. Malignancy migrates, and harms are not equally distributed across societies. Marginalized communities suffer disproportionate levels of harm online and off. Online spaces that do not account for that reality consequently scale malignancy and marginalization.
Within industry, decades of “trust and safety” (T&S) practice have developed into a field that can illuminate the complexities of building and operating online spaces. Outside industry, civil society groups, independent researchers, and academics continue to lead the way in building collective understanding of how risks propagate via online platforms—and how products could be constructed to better promote social well-being and to mitigate harms.