
Notes from "Towards Automated Accessibility Report Generation for Mobile Apps"


One of the most commonly used tools for generating audits on Apple platforms is the Accessibility Inspector, especially its audit functionality. The fact that it does not work on visionOS, along with many of the other limitations highlighted in this paper, has reduced my use of it to nearly zero. Still, there is a void: the need for a tool or pipeline that provides a better overview of the accessibility implementation in your app while being more proactive and reducing friction between iterations. Testing accessibility today is such an extensive task, given the wide range of accessibility settings and all the possible combinations of systems and devices, that it is practically impossible to cover on every release, leaving the door open to regressions or, worse, to neglecting the step altogether.

This is where this type of research is welcome; the proposal combines already existing tools with more efficiently trained machine learning models, fine-tuned to address their present limitations.

Quotes from the abstract:

  • developed a system to generate whole app accessibility reports by combining varied data collection methods (e.g., app crawling, manual recording) with an existing accessibility scanner
  • developed a screen grouping model with 96.9% accuracy (88.8% F1-score) and UI element matching heuristics with 97% accuracy (98.2% F1-score)
  • combined these technologies in a system to report and summarize unique issues across an app, and enable a unique pixel-based ignore feature to help engineers and testers better manage reported issues across their app’s lifetime
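To make the "unique issues" idea concrete: once every capture is assigned a screen-group label and elements are matched across captures, deduplication reduces to counting identical records. The paper uses a trained grouping model and matching heuristics; the toy sketch below just keys issues on a precomputed group label, and every name in it is hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical record emitted by an accessibility scanner for one captured screen.
@dataclass(frozen=True)
class Issue:
    category: str      # e.g. "missing-label", "low-contrast"
    element_id: str    # identifier matched across captures of the same screen
    screen_group: str  # label assigned by the screen-grouping step

def unique_issues(scanned):
    """Collapse repeats of the same issue found on re-captures of the same
    screen, keeping a count of how often each unique issue was seen."""
    counts = defaultdict(int)
    for issue in scanned:
        counts[issue] += 1
    return dict(counts)

report = unique_issues([
    Issue("missing-label", "btn.play", "player"),
    Issue("missing-label", "btn.play", "player"),  # same screen crawled twice
    Issue("low-contrast", "lbl.title", "home"),
])
```

The hard part in practice is producing the `screen_group` and `element_id` values reliably, which is exactly what the paper's model and heuristics provide.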

Main limitations of current approaches

Common implementations rely on app crawlers that explore an app, either randomly or through record & replay approaches, to detect accessibility issues:

  1. They rely on accessible view hierarchies to drive the crawling itself, which prior work has demonstrated are often incomplete or unavailable for highly inaccessible apps.
  2. None of these works has yet studied how users interact with and interpret information from these accessibility reports, and what features are important in an accessibility report generation tool.

Identified user needs for an accessibility report generation system

  1. Reduce the time required for developers to manually scan individual screens with accessibility auditing tools.
  2. Provide developers with an overall app accessibility report.
  3. Enable developers to reduce noise by ignoring false positive or previously addressed issues.
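The third need maps to the paper's pixel-based ignore feature: instead of tracking issues by fragile element identifiers, an issue can be suppressed by fingerprinting the pixels under its bounding box, so the same false positive stays hidden even as the view hierarchy changes. A minimal sketch, assuming raw RGBA screenshots; the function and field names are illustrative, not the paper's implementation:

```python
import hashlib

def region_fingerprint(pixels, frame):
    """Hash the raw pixel bytes under an issue's bounding box.

    `pixels` is a row-major list of rows (bytes of RGBA values) and
    `frame` is (x, y, w, h) in pixel units."""
    x, y, w, h = frame
    digest = hashlib.sha256()
    for row in pixels[y:y + h]:
        digest.update(row[x * 4:(x + w) * 4])
    return digest.hexdigest()

def drop_ignored(issues, ignored):
    """Filter out issues whose on-screen pixels match a previously ignored
    region, keeping repeat false positives out of future reports."""
    return [i for i in issues if i["fingerprint"] not in ignored]

screen = [bytes([255]) * 8 * 4 for _ in range(8)]  # 8x8 all-white RGBA screen
fp = region_fingerprint(screen, (1, 1, 2, 2))
issues = [{"category": "low-contrast", "fingerprint": fp}]
remaining = drop_ignored(issues, ignored={fp})
```

An exact hash is deliberately strict: any pixel change resurfaces the issue, which is arguably the right behavior for a "won't fix" that later changes appearance.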

Observations

  • While the participants sometimes wrote accessibility tests and used automated scanners, they reported primarily manually testing their apps
  • Current tools provide no results overview
  • Current tools are too noisy
  • Manual scanning introduces inefficiencies
  • Accessibility regressions can be created and persist for quite some time
  • The reports contained many dynamic type, missing label, poor contrast, and small target size issues, but participants also found some issues through manual testing that the system cannot detect.
  • Participants may benefit from having a mode that supports both automated exploration and control over which user scenarios and tasks are explored by the app crawler.
  • Supporting triaging and marking issues as ignored over time was also noted as an important feature in long term use, as participants mentioned they might be likely to file bugs or find issues in the report that they would mark as “won’t fix” or “minor issues”. These issues should then be filtered out of future reports automatically.
  • One participant in the study, a screen reader user, noted that for very inaccessible apps the system would let them generate a more complete report than manual scanning tools, which require using the VoiceOver screen reader to navigate the app to each screen for auditing. In very inaccessible apps, unexposed elements can make key areas impossible to navigate to, so those areas go unaudited.

Figure 1: The prototype interactive HTML report generated by our report generation system. The interface has three main areas: a) A carousel displaying all screens captured in the report, b) A menu to toggle between the 'Summary' view which is currently selected, and screen-by-screen views represented by thumbnail screenshots, and c) a table of summarized results of issues found, grouped by category. The report provides actions users can take including ignoring, viewing suggestions, filing bugs, and expanding the screenshot.
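The summary table in Figure 1 groups issues by category; once issues are deduplicated, that view is a simple aggregation. A minimal sketch, with illustrative field names:

```python
from collections import Counter

def summarize(issues):
    """Per-category counts for a report's summary table."""
    return Counter(issue["category"] for issue in issues)

summary = summarize([
    {"category": "missing-label", "screen": "home"},
    {"category": "missing-label", "screen": "settings"},
    {"category": "dynamic-type", "screen": "home"},
])
```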

This type of research highlights how artificial intelligence approaches can be applied to previously intractable problems, and seeds the next generation of accessibility tools.


Teaching Accessible Computing
For computing to work for everyone, it must be accessible to everyone. Alas, it is not: people with disabilities in mobility, vision, hearing, learning, attention, and more regularly face software that is hard or impossible for them to use. One reason for this is that when we educate future software engineers, we rarely teach them anything about accessibility. This limits their ability to find and fix accessibility defects and advocate to their organization to prioritize those fixes. More importantly, it limits the capacity of software organizations to design software that is accessible from day one.

This book addresses this problem by offering concrete pedagogical ideas for educators about how to integrate accessibility into their computer science classes. It teaches basic foundations of accessibility that are relevant to major areas of computer science teaching, and then presents teaching methods for integrating those topics into course designs. Our hope is that computer science teachers will be able to read the first few introductory chapters, and the chapters relevant to their teaching, and use their learning to teach accessible computing in their classes.

This book is a living document! If you’d like to be notified of future updates, or if you’re interested in contributing a chapter in your area of expertise, please let us know through the [Teaching Accessible Computing book Interest Form](https://forms.gle/a7KDmxnoyvi5ueUu9). If you have suggestions for improvement, send them to our lead editor, [Alannah Oleson](https://alannaholeson.com/). Many people contributed time, effort, and expertise to this book beyond just the authors and editors, including those listed in the [Acknowledgements](Acks) chapter.