Accessibility for Trust and Safety Flows

How well do platform reporting flows and context labels work with screen readers for the visually impaired?
Illustration: three mockups of a smartphone screen on which a screen reader announces each selectable choice only as "button" — the experience of using a poorly coded app with a screen reader.

Recent revelations about social media and messaging platforms amplifying hate or fueling violence at the US Capitol and in other countries around the world have drawn attention to the many ways that social media platforms can facilitate harm and abuse. These issues, often described under the umbrella of “trust and safety,” include problems such as the online spread of misinformation, radicalization, dissemination of child abuse images and sex trafficking, or negative effects of content on mental health and well-being. Social media companies have taken a number of steps to improve trust and safety on their platforms, but how well do these strategies work for users who access social media through accessibility technologies?

We conducted a simple audit of screen readers across multiple devices and platforms to evaluate how they interact with misinformation that has been labelled or removed by platforms, and to assess how well screen readers support reporting undesired, harmful or violative content. We found significant challenges for individuals accessing social media with these technologies, ranging from basic problems, such as informational labels not being read correctly, to more severe failures in which whole applications are rendered completely inaccessible to screen readers.

Disability Rights, Internet Use and Platform Policies

One billion people, or 15% of the global population, live with a disability. In the United States, more than 40 million people live with a disability, and at least 3.2 million Americans are visually impaired. However, there remains a significant digital divide between those who can benefit from digital technologies and those who cannot due to accessibility issues. Many people living with a disability face barriers to online communication and interaction: videos without captions are inaccessible for the deaf or hard of hearing; images without alternative text cannot be read aloud by screen readers used by the blind and those with low vision; and complex user interfaces make navigation difficult for those living with motor impairments. While technology has the potential to create numerous opportunities for social inclusion, it can also replicate the inaccessibility and discrimination experienced in the offline world.

In recent years, platform companies have taken several steps to make information accessible, sometimes through careful design, other times by retrofitting technologies for inclusion. Yet despite accessibility standards and best practices, there are still large gaps in creating a fair and accessible information space. These challenges are further exacerbated by continual platform updates, which can, for example, break screen reader flows or make user interfaces too complicated for inclusive use.

Platform policies to improve trust and safety also affect individuals living with disabilities. The design and implementation of misinformation labels determines, for example, whether a platform's labeling and fact-checking efforts reach users of assistive technologies at all. The inability of a user to report misinformation, harassment or other kinds of online harm can likewise affect user safety in a real and serious way.

Common Challenges using Screen Reader Technologies

Screen readers are products used by the blind or visually impaired to access on-screen computer content and websites. Users of screen reading technologies face a number of frequent obstacles. For one, applications and websites are in regular flux, with site or app updates frequently introducing new inaccessible functionality, breaking existing functionality, or creating navigation dead ends (where a user navigates to a screen with no way to navigate back short of closing the application). Users are therefore incentivized not to update their applications once they find a working version, which presents a two-fold risk. First, they may miss out on newer trust and safety features introduced by platforms and applications. Second, they may remain on a version of the app with unpatched security flaws.

Given that accessibility is often an afterthought or "nice to have" rather than a design requirement, apps are frequently shipped with inconsistent screen reader compatibility. For example, in an application with numerous buttons, each visually labeled with its function ("back," "forward," "submit," "like"), it is not uncommon for developers to overlook the accessibility label (alt text) of the UI element, causing screen readers to announce each one as simply "button," if they are read at all. In the case of abuse reporting mechanisms, this can make functionality largely impossible to use without the assistance of a sighted user. Given the intimate nature of some kinds of online harm, seeking such assistance may not be desirable or practical.
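
As a concrete illustration of how this happens, consider the UIKit sketch below. The view and button names are hypothetical rather than taken from any real app; the point is that an icon-only button has no text for VoiceOver to fall back on, so unless the developer sets an explicit accessibility label it is announced simply as "button."

```swift
import UIKit

// Hypothetical post-actions view; the names are illustrative only.
final class PostActionsView: UIView {
    private let likeButton = UIButton(type: .system)
    private let reportButton = UIButton(type: .system)

    override init(frame: CGRect) {
        super.init(frame: frame)

        // Icon-only buttons carry no title text, so VoiceOver announces them
        // as just "button" unless an accessibility label is set explicitly.
        likeButton.setImage(UIImage(systemName: "heart"), for: .normal)
        likeButton.accessibilityLabel = "Like"

        reportButton.setImage(UIImage(systemName: "exclamationmark.bubble"), for: .normal)
        reportButton.accessibilityLabel = "Report post"

        addSubview(likeButton)
        addSubview(reportButton)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```

Omitting those two accessibility label assignments is all it takes to reduce a reporting flow to a series of indistinguishable "button" announcements.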

Lastly, and less frequently considered, is the issue of reading order. Particularly in a trust and safety labeling context (e.g., content warnings or misinformation labels), differences between visual and auditory presentation can diminish the efficacy of these mechanisms. For example, a visual overlay or clickthrough may be placed over content for sighted users, but if it lacks screen reader support, the screen reader may ignore the overlay and read the underlying content directly. Content labels are also commonly placed after the content itself. This works for users who can visually scan the screen and see that the content is labeled, but for screen reader users it means the content is read in its entirety and only afterward flagged as false or misleading. The impact of this ordering is an interesting area for future study, but we hypothesize that it lessens the efficacy of informational labeling.
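
On iOS, for example, the default VoiceOver reading order follows the visual layout, but it can be overridden. The following is a minimal UIKit sketch, with hypothetical view names, of moving an informational label ahead of the content it annotates:

```swift
import UIKit

// Hypothetical labeled-post view; the views and their names are illustrative.
final class LabeledPostView: UIView {
    let contentLabel = UILabel()  // the post text
    let contextLabel = UILabel()  // e.g. a "missing context" notice, shown visually below the post

    func configureAccessibility() {
        // VoiceOver normally reads elements in layout order, so a label placed
        // below the post is only announced after the post has been read in full.
        // Overriding the element order announces the warning first while leaving
        // the visual design unchanged.
        accessibilityElements = [contextLabel, contentLabel]
    }
}
```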

Testing Screen Reader Functionality for Platform Trust & Safety

We conducted a surface-level audit of social media and messaging apps to test their functionality with screen readers. For mobile devices and macOS, we tested using the operating systems' built-in screen readers. For Windows we used the NVDA screen reader. Such an audit is challenging due to the constant revision of applications and websites, meaning that some of our findings may be outdated by the time of publication or reading. However, we believe the findings demonstrate the need for improved accessibility across the app ecosystem. Here is a sample of the many hurdles we encountered during our test window:

  • Facebook Messenger (iOS): A typical user reporting flow involves selecting the username you wish to report and then selecting a reporting element from the subsequent screen. When attempting to do this via VoiceOver, the reporter can sporadically get stuck on the username, where swiping left or right just repeats the username. This can necessitate killing and restarting the app to continue.

  • WhatsApp (macOS): In the typical reporting flow, after clicking “report,” the user is provided with two options in a pop-up. When using a screen reader, the reporter can click the “report” button but cannot navigate the popup afterward.

  • Signal (macOS): If messaged by an unknown user, there is no apparent way to navigate to the "block" option. When viewing a user's details, the UI elements of the previous conversation window aren't removed, making navigation confusing. Individual messages cannot be navigated to, and no reporting options exist.

  • Telegram (Windows, NVDA): Generally unusable. No keyboard navigation, and elements are not spoken correctly. No way to navigate to the block mechanism, no apparent reporting mechanism.

  • Instagram (iOS): Reporting for posts is unusable; the “meatball” context menu on posts cannot be navigated to with VoiceOver.

  • Instagram (Android): Cannot navigate to posts that are ‘unavailable’ in the app. There is no notice given to users that the post is unavailable; they are simply redirected to the user feed. 

  • Twitter (Android and iOS): On Android, for posts labeled as election misinformation ("Stay informed: Learn about US 2020 election security efforts"), the label is not read by the screen reader. On iOS, double tapping "Find out more" just repeats the label, and in some cases the tweet itself isn't read.

  • Facebook (Android): The "missing context" label is not read. Users can click a button to "see why" content has been labelled, but the screen reader also does not read the "see why" content once it is expanded.

Conclusion

Based on our experience trying to navigate popular platforms with various screen reading technologies, accessibility remains poorly implemented for key trust and safety features, including informational labels and reporting mechanisms. This reflects an industry-wide problem in which accessibility features are implemented in an ad hoc fashion instead of being built into the development process. We have compiled a list of preliminary recommendations for engineers and auditors to help improve the accessibility of informational labels and reporting mechanisms, and to help make social media technologies safe for everyone.

Recommendations

  • Encourage developers, managers and testers to use their app via a screen reader on a regular basis. While a QA cycle should always include accessibility testing, setting aside at least one or two days a year for sighted developers to learn how to use VoiceOver or a similar technology and spend time attempting to navigate their app can provide much-needed perspective.
  • Consider reading order when building T&S messaging elements — if possible, put content labels first in the reading order, even if they’re visually positioned after the content for sighted users.
  • The development, build and release process should have mechanisms to throw errors when elements are unlabeled/lacking alt text — code templates and snippets should have placeholders for labels and reading order. Use IDE mechanisms to highlight potential accessibility issues — for example, the iOS Accessibility Inspector or the Android Accessibility Scanner.
  • Perform regression tests with semi-automated scanning tools like GSCXScanner.
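
To make the last two recommendations concrete, the sketch below shows one way such a regression check might look in an iOS UI test target. It is an illustration rather than a substitute for a full scanner such as GSCXScanner, and the class and test names are ours: it simply launches the app and fails if any visible button exposes an empty accessibility label, the "button"-only failure mode described above.

```swift
import XCTest

// Minimal accessibility regression check (sketch); assumes a UI test target exists.
final class AccessibilityLabelRegressionTests: XCTestCase {
    func testAllVisibleButtonsHaveLabels() {
        let app = XCUIApplication()
        app.launch()

        // Fail if any hittable button would be announced with no label.
        for index in 0..<app.buttons.count {
            let button = app.buttons.element(boundBy: index)
            guard button.exists, button.isHittable else { continue }
            XCTAssertFalse(button.label.isEmpty,
                           "Button at index \(index) has no accessibility label")
        }
    }
}
```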