A recent blog post from Sitemorse entitled "Why are sites ranked lower by Sitemorse than Shaw Trust and AbilityNet" addressed the issue of low scores for sites tested with their automated compliance checking tool. They explain that the low scores their tool gives sites arise from the fact that it checks for issues in areas other than pure accessibility (WCAG 2.0) compliance – for example valid HTML, spelling, broken links and so on.
In fact any enterprise checking tool can scan for, and highlight, issues in these other areas (such as code compliance, privacy and link validation). When you include more areas of compliance in your automated scan you are bound to get more issues highlighted than if you restricted the scan to one area alone.
In addition, any automated scan will also produce a raft of extra warnings – potential errors that need to be confirmed or rejected by a manual review. When scanning an entire website comprising thousands of pages, the only way to confirm the status of these warnings (and weed out the false positives) is to do a full manual audit of a representative cross-section of the site and use this information as an indication of the compliance picture site-wide.
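To make the distinction between definite errors and warnings concrete, here is a minimal, hypothetical sketch of one check an automated tool might run – flagging images with no text alternative as definite WCAG failures, while treating vague-looking alt text as a warning that only a human reviewer can confirm or reject. This is purely illustrative and is not Compliance Sheriff's actual logic; the class name and the list of "suspect" alt values are our own assumptions.

```python
from html.parser import HTMLParser

class ImageAltChecker(HTMLParser):
    """Hypothetical sketch of one automated accessibility check."""

    # Alt values that often indicate unhelpful or placeholder text
    # (an empty alt may be legitimate for decorative images, so it
    # can only ever be a warning, never a definite error).
    SUSPECT_ALTS = {"", "image", "photo", "picture", "graphic"}

    def __init__(self):
        super().__init__()
        self.errors = []    # definite failures: no human judgement needed
        self.warnings = []  # potential failures: need a manual review

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "?")
        if "alt" not in attrs:
            # WCAG requires a text alternative, so a missing alt
            # attribute is a definite error.
            self.errors.append(f"{src}: missing alt attribute")
        elif attrs["alt"].strip().lower() in self.SUSPECT_ALTS:
            # alt is present but may not describe the image; only a
            # human can judge whether it is adequate.
            self.warnings.append(f"{src}: suspect alt text {attrs['alt']!r}")

checker = ImageAltChecker()
checker.feed('<img src="logo.png">'
             '<img src="chart.png" alt="image">'
             '<img src="ceo.jpg" alt="Photo of our CEO at the 2009 AGM">')
print(checker.errors)    # logo.png: a definite error
print(checker.warnings)  # chart.png: flagged for manual review only
```

Even in this toy example, one of the three images produces a warning that an automated tool cannot resolve on its own – which is exactly why a scan of thousands of pages needs a manual audit behind it.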
Thus it's no wonder that you see a lower score for a site's overall compliance when you are doing a broad automated scan and taking the results at face value alone.
To illustrate the need for a manual review to validate the findings of an automated scan, we used our enterprise automated compliance checking tool of choice, 'Compliance Sheriff', and ran a scan on the very page where that blog post is to be found (http://blog.sitemorse.com/2009/06/why-are-sites-ranked-lower-by.html). The scan highlighted 10 definite accessibility errors – 4 at priority level 1, 2 at level 2 and 4 at level 3. However, if you take a manual audit approach (and make sure your methodology includes testing with actual assistive technologies) you find that this page is, in practice, not bad at all.
Whilst automated checking tools, such as Compliance Sheriff, are invaluable for quick site-wide scans that can provide a wealth of information on areas such as accessibility, it is only when such a scan is followed up with a manual review that you can be sure you have the whole picture.
We'd actually go one step further than this and say that a combined automated and manual auditing process needs to be augmented by disabled end user testing. Our testers, who have a range of impairments and use a range of assistive technologies, recently tested a site that was strictly AA-compliant (and hence would have been given a clean bill of health by any sensible auditing methodology), yet almost every tester was unable to complete the majority of the tasks provided. This is because accessibility doesn't guarantee usability – for disabled and able-bodied users alike. There is no real substitute for testing accessibility with actual assistive technologies – preferably wielded by actual end users.
So when you receive a report from AbilityNet you know the issues it contains are the whole story, with nothing superfluous. You get the best that an automated scan can offer, validated by an expert manual review and augmented by real-life testing with disabled end users. You also get recommended solutions tailored to your content, not simply boilerplate text.
Phew – what a relief there's such a service out there! Contact us right away and we'll get cracking on your site, intranet or app and together we'll really make things happen.