Karl Groves: Choosing An Automated Accessibility Testing Tool

Karl Groves @karlgroves

This is my first time seeing Karl IRL; I’ve followed his posts pretty regularly over the last few years, so I’m excited for this session.  Karl briefly talked about being a Viking, and some of that lore was colorful.  Sweet, I’m in 🙂

He also talked about testing tools and asked the audience what kinds of tools people are using.  Examples:  ACCVerify, Firebug, WAVE toolbar, and so on.

Automated testing has an unbeatable cost-per-defect ratio.  However, it only finds what it’s designed to find:  things that are machine-testable.  Some of the earlier 1st gen tools:  LIFT, Insight, ACCVerify, Bobby, etc.  These were often desktop software and typically searched the source as strings.  Karl lost a job with GAWDS because a 1st gen tool’s report rejected his site.  Blech!

2nd gen tools are different in a lot of ways.  They’re almost universally web-based because they’re easier to deal with and, more importantly, they eliminate the lack of collaboration engendered by 1st gen tools.  They’re also more efficient at managing testing criteria, thresholds and standards.  They do have weaknesses though.  They can’t test every single thing; you’ll never get full coverage (in fact, only as much as 17% at best!), and some WCAG success criteria cannot be tested AT ALL.  That’s where manual testing (human review) comes in.  Also, you’re never quite sure what they’re actually testing:  the DOM or the source string.  Accessibility APIs work with the browser and assistive technology, so if you’re not testing the DOM, you’re not testing the user experience.

QUOTABLE QUOTE:  “Your HTML is a polite request to the browser”
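
To make the DOM-versus-string point concrete, here’s a minimal sketch of my own (not from the talk), assuming Node with the jsdom package installed.  The markup is a made-up example where the alt text only appears after scripts run, so a string scan and a DOM check disagree about whether the image has a text alternative:

```ts
// Sketch: why scanning the HTML string is not the same as testing the DOM.
// Assumes Node with the "jsdom" package installed; the markup is invented
// for illustration, not taken from the session.
import { JSDOM } from "jsdom";

const source = `
  <img id="logo" src="logo.png">
  <script>
    // Alt text is added at runtime, as many script-heavy sites do.
    document.getElementById("logo").setAttribute("alt", "Acme Co. logo");
  </script>`;

// 1st-gen style: search the raw string. This flags the image as missing alt.
const stringSaysMissingAlt = !/<img[^>]*\balt=/i.test(source);

// 2nd-gen style: build the DOM, let scripts run, then inspect what the
// browser (and the accessibility API behind it) would actually expose.
const dom = new JSDOM(source, { runScripts: "dangerously" });
const img = dom.window.document.getElementById("logo");
const domSaysMissingAlt = !img?.hasAttribute("alt");

console.log({ stringSaysMissingAlt, domSaysMissingAlt });
// => { stringSaysMissingAlt: true, domSaysMissingAlt: false }
```

The same gap shows up whenever the browser repairs sloppy markup or JavaScript rewrites the page: the parsed DOM, not the source string, is what gets handed to assistive technology.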

PRIMARY CRITERIA FOR SELECTION

  1. Is the tool user friendly?
  2. Quality & reliability of results (does it generate a lot of false positives?)
  3. Does it test the DOM?
  4. Is it web based?
  5. Integration (these tools are generally separate systems)
  6. Does it spider?  (this is useful for sites that constantly add tons of new content)
  7. Does it test uploaded / submitted source? (gets developers to test code while they’re working; see the CI sketch after this list)
  8. Monitoring and Retesting (keep an eye on things)
  9. Manual Test Guidance (does it give us info on how to test suspicious stuff?)
  10. Standards and test management (can we make the tool work for us?)
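
For criteria 3, 5, and 7, here’s the rough shape of an integration a team might set up, sketched with tools of my own choosing:  it assumes Node with the puppeteer and @axe-core/puppeteer packages (axe-core wasn’t named in the session; it simply stands in for any DOM-level engine), and the local URL is hypothetical.

```ts
// Sketch: run a DOM-based accessibility check as part of the build so
// developers test code while they're working. Assumes puppeteer and
// @axe-core/puppeteer are installed; "http://localhost:3000" is a
// hypothetical dev server URL.
import puppeteer from "puppeteer";
import { AxePuppeteer } from "@axe-core/puppeteer";

async function checkPage(url: string): Promise<number> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });

  // Analyze the live DOM, after scripts have run -- not the source string.
  const results = await new AxePuppeteer(page).analyze();
  for (const v of results.violations) {
    console.error(`${v.id}: ${v.help} (${v.nodes.length} nodes)`);
  }

  await browser.close();
  return results.violations.length;
}

// A non-zero exit code fails the build when machine-testable problems are
// found; everything that isn't machine-testable still needs a human review.
checkPage("http://localhost:3000").then((count) => {
  process.exitCode = count > 0 ? 1 : 0;
});
```

Failing the build this way is what the integration criterion is getting at; the manual-test-guidance criterion covers the rest, since the automated pass only ever catches the machine-testable slice.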

MAKING YOUR CHOICE

  • Take your time
  • Take a test drive (ask for a full working license for 30 days…don’t just sit through a 1-hour canned review)
  • Demand satisfaction

QUESTIONS

  • What about users with different / older versions or client settings?  At some level, the organization has to draw a line in the sand about what they support.
  • There’s a W3C effort to crowdsource testing tools, criteria and thresholds
  • Can you talk about the tools you currently use and have used?  Each has its benefits and drawbacks; examples include FireEyes and favelets.
  • Given resources, how do you weigh buy versus build?  The tool needs to work the way I need it to work.  Is there open source stuff available?  Peter:  there might be one in Greece, plus the OpenAjax tool and open rulesets, and the U of Washington “before and after” examples.
  • Another criterion:  does the tool upload test data to a remote server?  This could be a problem for some.
  • Karl has a whitepaper available:  to get it, e-mail him at karl@simplyaccessible.com

By Paul Schantz

CSUN Director of Web & Technology Services, Student Affairs. husband, father, gamer, part time aviator, fitness enthusiast, Apple fan, and iguana wrangler.
