If you’re an accessibility nerd like me, or just curious about assistive technology, you’ll love Auto VO. Auto VO is a Node module and CLI for automated testing of web content on macOS using the VoiceOver screen reader.

I created Auto VO to make it easier for developers, project managers, and QA to do repeatable, automated testing quickly with real assistive technology, without the intimidation factor of learning how to use a screen reader.

Let’s get started!

First, let’s take a look at it in action, and then I’ll explain how it works in more detail. Here’s the text of all the VoiceOver output from running the auto-vo CLI on SmashingMagazine.com.

$ auto-vo --url https://smashingmagazine.com --limit 200 > output.txt
$ cat output.txt
link Jump to all topics
link Jump to list of all articles
link image Smashing Magazine
list 6 items
link Articles
link Guides 2 of 6
link Books 3 of 6
link Workshops 4 of 6
link Membership 5 of 6
More menu pop up collapsed button 6 of 6
end of list
end of navigation
... (truncated)

It looks like a reasonable page structure: we have skip navigation links, well-structured lists, and semantic navigation. Great job! But let’s dig a little deeper. What does the heading structure look like?

$ cat output.txt | grep heading
heading level 2 link A Complete Guide To Accessibility Tooling
heading level 2 link Spinning Up Multiple WordPress Sites Locally With DevKinsta
heading level 2 link Smashing Podcast Episode 39 With Addy Osmani: Image Optimization
heading level 2 2 items A SMASHING GUIDE TO Accessible Front-End Components
heading level 2 2 items A SMASHING GUIDE TO CSS Generators & Tools
heading level 2 2 items A SMASHING GUIDE TO Front-End Performance 2021
heading level 4 LATEST POSTS
heading level 1 link When CSS Isn't Enough: JavaScript Requirements For Accessible Components
heading level 1 link Web Design Done Well: Making Use Of Audio
heading level 1 link Useful Front-End Boilerplates And Starter Kits
heading level 1 link Three Front-End Auditing Tools I Discovered Recently
heading level 1 link Meet :has, A Native CSS Parent Selector (And More)
heading level 1 link From AVIF to WebP: A New Smashing Book By Addy Osmani

Uh oh! Our heading hierarchy is a bit out of whack. We should see an outline: one level-one heading followed by an ordered hierarchy of subheadings. Instead, we get a mishmash of level 1s, level 2s, and an errant level 4. This matters because it affects how screen reader users experience and navigate the page.

Having the screen reader output as plain text is great, because it makes this kind of analysis much easier.
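For example, since the output is just text, a few lines of Node can tally the heading levels on a page. This is a quick sketch of my own, not part of Auto VO; it assumes the output.txt captured above, with one announcement per line.

// summarize-headings.mjs — a hypothetical helper script, not part of Auto VO
import { readFileSync } from 'fs';

const lines = readFileSync('output.txt', 'utf8').split('\n');
const counts = {};

for (const line of lines) {
  // VoiceOver announces headings as "heading level N ..."
  const match = line.match(/heading level (\d)/i);
  if (match) {
    counts[match[1]] = (counts[match[1]] || 0) + 1;
  }
}

console.log(counts); // e.g. { '1': 6, '2': 6, '4': 1 }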

Some background

VoiceOver is the screen reader on macOS. Screen readers read an application’s content aloud and let people interact with it, which means that people with low vision or who are blind can, in theory, access content, including web content. In practice, though, 98% of landing pages on the web have obvious errors that could be caught with automated testing and review.

There are a number of automated testing and review tools available, including AccessLint.com for automated code review (disclosure: I built AccessLint) and Axe, a common automation tool. These tools are important and useful, but they’re only part of the picture. Manual testing is equally, if not more, important, but it’s also more time-consuming and can be intimidating.

You’ve probably heard advice like “just turn on your screen reader and listen” to get a sense of the experience of someone who is blind. I think that’s misguided. Screen readers are sophisticated applications that can take months or years to master, and they’re overwhelming at first. Using one haphazardly to simulate a blind person’s experience can lead you to feel sorry for blind people, or worse, to try to “fix” the experience in the wrong way.

I’ve seen people panic when they turn on VoiceOver and don’t know how to turn it off. Auto VO manages the screen reader’s lifecycle for you: it automatically starts, controls, and shuts down VoiceOver, so you don’t have to. Instead of trying to listen and keep up, you get the output back as text, which you can then read, evaluate, and capture for later reference in bug reports or for automated snapshots.

Usage

Caveats

Currently, because AppleScript must be enabled for VoiceOver, running this in CI may require custom configuration of the build machines.

Scenario 1: QA and acceptance

Suppose I (the developer) am working from a design with blueline annotations, where the designer has included accessibility descriptions such as accessible names and roles. Once I’ve built the feature and reviewed the markup in the Chrome or Firefox developer tools, I want to output the results to a text file so that when I mark the feature as finished, my PM can compare the screen reader output against the design specs. I can use the auto-vo CLI and send the results to a file or to the terminal. We saw an example of this earlier in the article.

$ auto-vo --url https://smashingmagazine.com --limit 100
Scenario 2: Test-driven development

Here I am again as the developer, building my feature against the blueline-annotated design. I want to test the content so that I don’t have to refactor the markup afterwards to match the design. I can do that with the Auto VO Node module, imported into my preferred test runner, such as Mocha.

$ npm install --save-dev auto-vo

import { run } from 'auto-vo';
import { expect } from 'chai';

describe('loading example.com', () => {
  it('returns announcements', async () => {
    // Target URL, a cap on the number of announcements, and a phrase to stop on.
    const options = { url: 'https://www.example.com', limit: 10, until: 'Example' };

    const announcements = await run(options);

    expect(announcements).to.include.members(['Example Domain web content']);
  }).timeout(5000);
});
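Earlier I mentioned capturing output for automated snapshots. Building on the same run() API, a snapshot-style check could look something like the sketch below; the snapshot file path and the exact comparison are my own assumptions, not part of Auto VO.

import { run } from 'auto-vo';
import { expect } from 'chai';
import { readFileSync } from 'fs';

describe('homepage announcements', () => {
  it('matches the approved snapshot', async () => {
    // A previously captured run, saved one announcement per line (hypothetical file).
    const approved = readFileSync('snapshots/homepage.txt', 'utf8').trim().split('\n');

    const announcements = await run({ url: 'https://www.example.com', limit: 10, until: 'Example' });

    expect(announcements).to.deep.equal(approved);
  }).timeout(5000);
});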

Under the Hood

Auto VO uses a combination of shell scripts and AppleScript to drive VoiceOver. While researching the VoiceOver application, I found a CLI that supports launching VoiceOver from the command line:

/System/Library/CoreServices/VoiceOver.app/Contents/MacOS/VoiceOverStarter

From there, a series of JavaScript executables manage AppleScript instructions to navigate VoiceOver and capture its announcements. For example, this script grabs the last phrase from the screen reader’s announcements:

function run() {
  // Ask the running VoiceOver application for its most recently spoken phrase.
  const voiceOver = Application('VoiceOver');
  return voiceOver.lastPhrase.content();
}
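To show how those pieces might fit together, here’s a rough sketch of a Node script handing a JXA snippet like the one above to osascript. It’s an illustration of the general approach, not Auto VO’s actual implementation.

// drive-voiceover.mjs — an illustrative sketch, not Auto VO's actual source
import { execFileSync } from 'child_process';

// The same JXA snippet as above, passed to osascript's JavaScript engine.
const script = `
  function run() {
    const voiceOver = Application('VoiceOver');
    return voiceOver.lastPhrase.content();
  }
`;

const lastPhrase = execFileSync('osascript', ['-l', 'JavaScript', '-e', script], {
  encoding: 'utf8',
}).trim();

console.log(lastPhrase);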

Conclusion

I’d love to hear about your experience with Auto VO, and contributions are welcome on GitHub. Give it a try and let me know how it goes!