Copyright © 2017 W3C® (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and document use rules apply.
This document describes features that web authoring and quality assurance tools can incorporate, so that they support the evaluation of accessibility requirements, such as those defined by the Web Content Accessibility Guidelines (WCAG) 2.0. The main purpose of this document is to promote awareness of such tool features and to provide introductory guidance for tool developers on what kind of features they could provide in future implementations of their tools. This listing of features could also be used to help compare different types of evaluation tools, for example during the procurement of such tools.
The features in scope of this document include capabilities to help specify, manage, carry out and report the results from web accessibility evaluations. For example, some of the described features relate to crawling of websites, interacting with tool users to carry out semi-automated evaluation, and providing evaluation results in a machine-readable format. This document does not describe the evaluation of web content features, which is addressed by WCAG 2.0 and its supporting documents.
This document encourages the incorporation of accessibility evaluation features in all web authoring and quality assurance tools, and the continued development and creation of different types of web accessibility evaluation tools. The document neither prioritizes nor requires any particular accessibility evaluation feature or specific type of evaluation tool. It describes features that can be provided by tools that support fully-automated, semi-automated, and manual web accessibility evaluation. Following this document can help tool developers to meet accessibility checking requirements defined by the Authoring Tool Accessibility Guidelines (ATAG).
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
This Developers' Guide to Features of Web Accessibility Evaluation Tools is published as a W3C Working Group Note because the Evaluation and Repair Tools Working Group (ERT WG) reached the end of its Charter.
This document is considered to be fairly complete, but it did not receive sufficient review before the group needed to close. In particular, there may be missing features and profiles in this listing.
If you wish to make comments regarding this Developers' Guide to Features of Web Accessibility Evaluation Tools document, please send them to [email protected] (publicly visible mailing list archive).
Publication as a Working Group Note does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 1 September 2015 W3C Process Document.
Designing, developing, monitoring, and managing a website typically involves a variety of tasks and people who use different types of tools. For example, a web developer might use an integrated development environment (IDE) to create templates for a content management system (CMS), while a web content author will typically use the content-editing facility provided by the CMS to create and edit the web pages. Ideally all these tools provide features to support everyone involved throughout the process in evaluating accessibility. For example, an IDE could provide functionality to check document fragments so that the developer can check individual web page components during their development, and a CMS could provide functionality to customize the accessibility checks that are automatically carried out to help monitor the quality of the website. This document lists and describes these types of features that can be provided by tools to support accessibility evaluation in a variety of situations and contexts.
In the context of this document, an evaluation tool is a (web-based or non-web-based) software application that enables its users to evaluate web content according to specific quality criteria, such as web accessibility requirements. This includes but is not limited to the following (non-mutually-exclusive) types of tools:

- Web accessibility evaluation tools, which check web content against web accessibility requirements, such as those of WCAG 2.0;
- Web quality assurance tools, which check web content against broader quality criteria and may also support managing quality assurance processes;
- Web authoring tools, such as content management systems and integrated development environments, which may provide evaluation functionality as part of the authoring process.
Note that these terms are not mutually exclusive. A web accessibility evaluation tool is a particular type of web quality assurance tool. In some cases it can be part of a web authoring tool, or considered to be a web authoring tool depending on the functionality that it provides (see the ATAG 2.0 definition for authoring tool). Also, a web quality assurance tool might not check for accessibility criteria but might provide other functionality, such as for managing quality assurance processes and reporting evaluation results, that may be useful for web accessibility evaluation. This document refers to any of these tools collectively as evaluation tools.
W3C Web Accessibility Initiative (WAI) provides a list of web accessibility evaluation tools that can be searched according to different criteria such as the features listed in this document.
[Review Note: Feedback on this section is particularly welcome, specifically with suggestions for accessibility evaluation features that are not listed below and with comments to refine listed accessibility evaluation features.]
The features of an accessibility evaluation tool are presented in this section from different perspectives: the test subject and its environment (i.e., the web content to be evaluated and the linked resources that enable its rendering by the user agent for the end user), the testing functionality, the reporting and monitoring capabilities of the tool, and other tool usage characteristics, such as integration into the development and editing workflow of the user.
The list of accessibility evaluation features described below is not exhaustive. It may be neither possible nor desirable for a single tool to implement all of the listed features. For example, tools that are specifically designed to assist designers in creating web page layouts would likely not incorporate features for evaluating the code of web applications. Developers can use this list to identify features that are relevant to their tools and to plan their implementation. Others interested in acquiring and using evaluation tools can also use this document to learn about relevant features to look for.
This category includes features that help to retrieve and render different types of web content. Some tools retrieve the content to be analyzed from the file system or from a database, but the majority retrieve it over a network connection through the HTTP(S) protocol. This section focuses mostly on the latter scenario.

Due to the characteristics of the HTTP(S) protocol, the rendering of a web resource implies the manipulation and storage of many other components associated with it, such as request and response headers, session information, cookies, and authentication information. These associated components are also considered in the following sections.
Although the majority of web resources are HTML documents, there are many other types of resources that need to be considered when analyzing web accessibility. For example, resources like CSS style sheets or JavaScript scripts can modify markup documents in the user agent when they are loaded or through user interaction. The results of many accessibility tests depend on the interpretation of those resources, which are therefore important for an accessibility evaluation. Accessibility evaluation tools should state which types of formats they support.
Some types of content formats that evaluation tools may need to support include HTML and XHTML markup documents, CSS style sheets, client-side scripts such as JavaScript, and media resources such as images, audio, and video.
This feature identifies which content languages and encodings are supported by the evaluation tool. The web is a multilingual and multicultural space in which information can be presented in different languages, so evaluation tools should be able to address this issue. Furthermore, web content can be transmitted using different character encodings and sets (such as ISO-8859-1, UTF-8, UTF-16, etc.), which demands that evaluation tools be able to handle them.
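As an illustration, the following minimal sketch shows how a tool might determine the character encoding of a fetched resource before decoding it. It assumes the Python `requests` library; the URL is hypothetical.

```python
import requests

# Fetch a (hypothetical) page and determine its character encoding.
response = requests.get("https://example.org/page")

# requests derives the encoding from the Content-Type header;
# apparent_encoding falls back to statistical detection on the body.
declared = response.encoding
detected = response.apparent_encoding
text = response.content.decode(declared or detected, errors="replace")
print(f"declared: {declared}, detected: {detected}, length: {len(text)}")
```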
More information about this topic can be found in the W3C Internationalization Activity [W3Ci18n].
Many websites are generated dynamically by combining code templates with HTML snippets that are created by website editors. Evaluation tools may be integrated into Content Management Systems (CMS) and Integrated Development Environments (IDE) to test these snippets as developers and/or editors create them.
Usually this is implemented in evaluation tools by creating DOM document fragments [DOM] from these snippets. Evaluation tools may also filter the accessibility tests according to their relevance to the document fragment.
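The following sketch illustrates the idea of checking a snippet in isolation. It assumes the Python `BeautifulSoup` library for parsing the fragment, and a single image-related check stands in for a real test suite.

```python
from bs4 import BeautifulSoup

def check_snippet(snippet: str) -> list[str]:
    """Run one illustrative check (images need alt text) on a fragment."""
    fragment = BeautifulSoup(snippet, "html.parser")
    problems = []
    for img in fragment.find_all("img"):
        if not img.has_attr("alt"):
            problems.append(f"missing alt attribute: {img}")
    return problems

# A CMS could call this on each snippet as the editor saves it.
print(check_snippet('<figure><img src="chart.png"></figure>'))
```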
Web and cloud applications are becoming increasingly common. These applications present interaction patterns similar to those of desktop applications and contain dynamic content and interface updates. Tools that evaluate such applications should emulate and record different user actions (e.g., activating interface components by clicking with the mouse, swiping with the fingers on a touch screen, or using the keyboard) that modify the state of the current page or load new resources. The evaluation tool needs to define and record these intermediate steps so that they can later be interpreted by the tool (see the section on web testing APIs).
Content negotiation is a characteristic of the HTTP(S) protocol that enables web servers to customize the representation of requested resources according to the demands of the client user agent. Because of this, the identification of a resource on the web by a Uniform Resource Identifier (URI) alone may not be sufficient. To support content negotiation, the testing tool customizes and stores the HTTP headers according to different criteria.
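As a sketch of this feature, again assuming the Python `requests` library and a hypothetical URI, a tool can set and record the negotiation headers it used:

```python
import requests

# Request the same (hypothetical) URI with specific negotiation headers;
# the server may return a different representation for each combination.
headers = {
    "Accept": "text/html",
    "Accept-Language": "de",          # ask for the German representation
    "User-Agent": "ExampleEvalTool/1.0",
}
response = requests.get("https://example.org/", headers=headers)

# Store the headers alongside the result so the evaluation is reproducible.
print(response.request.headers["Accept-Language"], response.status_code)
```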
A cookie is a name-value pair that is stored by the user agent [HTTPCOOKIES]. Cookies contain information relevant to the website that is being rendered and often include authentication and session information exchanged between the client and the server, which, as seen before, may be relevant for content negotiation.
A tool that supports cookies may store the cookie information provided by the server in an HTTP response and reuse it in subsequent requests. It may also allow the user to manually set cookie information to be used with the HTTP requests.
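As an illustration, assuming the Python `requests` library (URLs hypothetical), a session object can both reuse server-set cookies and accept manually set ones:

```python
import requests

# A Session stores cookies set by the server and reuses them automatically.
session = requests.Session()
session.get("https://example.org/set-preferences")   # server may set cookies

# The user can also set cookie values manually before evaluation.
session.cookies.set("theme", "high-contrast")
response = session.get("https://example.org/members")
print(session.cookies.get_dict())
```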
Websites may require authentication (e.g., HTTP authentication, OpenID, etc.) to control access to given parts of the website or to present customized content to authenticated users.
A tool that supports authentication either allows the user to provide their credentials beforehand, so that they are used when accessing protected resources, or prompts the user to enter their credentials upon the server's request. The tool may also support the use of different credentials for different parts of a website.
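For example, assuming the Python `requests` library and hypothetical URLs and credentials, HTTP Basic authentication can be supplied per request:

```python
import requests

# Credentials supplied beforehand are sent with each protected request.
response = requests.get(
    "https://example.org/intranet/",
    auth=("evaluator", "s3cret"),     # HTTP Basic authentication
)

# Different credentials can be configured for different parts of the site.
admin = requests.get("https://example.org/admin/", auth=("admin", "other"))
print(response.status_code, admin.status_code)
```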
Within HTTP, session information can be used for different purposes, such as implementing security mechanisms (login information, logging a user out after a long period of inactivity) or tracking the interaction paths of users.
Session information can be stored in the user agent's local storage, in a session ID in the URL, or in a cookie, for example. An evaluation tool that supports session tracking should be able to handle these different scenarios.

Some evaluation tools incorporate a web crawler [WEBCRAWLER] that is able to extract hyperlinks from web resources. There are many types of resources on the web that contain hyperlinks; the misconception that only HTML documents contain links may lead to wrong results in the evaluation process.
A web crawler is configured with a starting point (seed) and a set of options. Common configuration capabilities include the crawling depth, the maximum number of resources to retrieve, and patterns to include or exclude certain parts of a website.
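The sketch below illustrates a basic crawler of this kind, assuming the Python `requests` and `BeautifulSoup` libraries; the seed URL and limits are arbitrary examples.

```python
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(seed: str, max_pages: int = 50) -> set[str]:
    """Collect URLs reachable from the seed, restricted to its domain."""
    domain = urlparse(seed).netloc
    queue, seen = [seed], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = requests.get(url).text
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).netloc == domain and link not in seen:
                queue.append(link)
    return seen

print(crawl("https://example.org/"))
```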
This category includes features targeted to the configuration of the tests to be performed.
Accessibility evaluation tools may offer the possibility to select a given subset of evaluation tests, or even a single one. A typical example could be performing tests for the different conformance levels (A, AA, or AAA) of the Web Content Accessibility Guidelines 2.0, or selecting individual tests for a single technique or common failure.
This feature should not be confused with tools that are focused on testing a single characteristic of a web page, such as a tool that tests color contrast.
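As an illustration, the sketch below shows one way a tool might let users select the subset of tests for a given conformance level. The rule registry and identifiers are hypothetical.

```python
# A (hypothetical) rule registry tagged by WCAG 2.0 conformance level.
RULES = [
    {"id": "img-alt", "level": "A"},
    {"id": "contrast-minimum", "level": "AA"},
    {"id": "contrast-enhanced", "level": "AAA"},
]

def select_rules(max_level: str) -> list[dict]:
    """Return the subset of rules up to the requested conformance level."""
    order = {"A": 1, "AA": 2, "AAA": 3}
    return [r for r in RULES if order[r["level"]] <= order[max_level]]

print([r["id"] for r in select_rules("AA")])  # img-alt, contrast-minimum
```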
According to the Evaluation and Report Language (EARL) specification [EARL10], there are three modes in which accessibility tests can be performed:

- automatic: the test was performed entirely by the tool, without human intervention;
- semiautomatic: the test was performed primarily by the tool, but required human assistance;
- manual: the test was performed primarily by a human evaluator, possibly with support from the tool.
Some evaluation tools support accessibility experts in performing semiautomatic or manual tests. This support is normally provided by highlighting areas in the source code or in the rendered document that could be causing accessibility problems, or where human intervention is needed (for instance, to judge the adequacy of a given text alternative for an image).
Tools may keep provenance information (i.e., which part of the report was automatically generated by the tool and which was manually modified).
Some tools fail to declare that they only perform automatic testing. Since automatic tests cover only a small subset of accessibility issues, full accessibility conformance can only be ensured by also supporting developers and accessibility experts with manual and semiautomatic testing.
Developers and quality assurance engineers sometimes need to implement their own tests. For that purpose, some tools provide an API that helps developers create their own tests, for example to respond to internal demands within their organization.
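There is no standard API for such extensions; the following sketch merely illustrates the idea with a hypothetical decorator-based registry in Python.

```python
from typing import Callable

# A minimal (hypothetical) extension point: organizations register their
# own test functions, which receive the page source and report problems.
CUSTOM_TESTS: dict[str, Callable[[str], list[str]]] = {}

def register_test(test_id: str):
    def decorator(func):
        CUSTOM_TESTS[test_id] = func
        return func
    return decorator

@register_test("org-brand-logo-alt")
def brand_logo_has_alt(html: str) -> list[str]:
    """An organization-specific rule added through the extension API."""
    return [] if 'alt="ACME logo"' in html else ["brand logo alt text missing"]

print({tid: test("<img src=logo.png>") for tid, test in CUSTOM_TESTS.items()})
```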
When evaluating the accessibility of web sites and applications, it is sometimes desirable to create scripts that emulate user interaction. With the growing complexity of web applications, there is an effort to standardize such interfaces. One of them is, for instance, the WebDriver API [WebDriver]. Tools that support this API enable developers to write tests that automate the behavior of the application and emulate end-user interaction.
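For illustration, the following sketch drives a browser with the Selenium Python bindings, which implement the WebDriver protocol. The URL, selector, and check are hypothetical, and a matching browser driver (here, geckodriver for Firefox) is assumed to be installed.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Drive a real browser through the states of a dynamic application,
# evaluating the DOM after each interaction.
driver = webdriver.Firefox()
driver.get("https://example.org/app")

# Emulate the end user opening a dialog, then inspect the updated DOM.
driver.find_element(By.CSS_SELECTOR, "button.open-dialog").click()
images = driver.find_elements(By.TAG_NAME, "img")
missing_alt = [i for i in images if not i.get_attribute("alt")]
print(f"{len(missing_alt)} images without alt text in this state")
driver.quit()
```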
This category includes features related to the ability of the tool to present, store, import, export, and compare testing results in different ways. In this section, the term report must be interpreted in its widest sense. It could be a set of computer screens presenting different tables and graphics, a set of icons superimposed on the content displayed to the user to indicate different types of errors and warnings, an HTML document or word-processor document summarizing the evaluation results, etc.
Support for standard reporting languages like EARL [EARL10] is a requirement for many users. There are cases where tool users want to exchange results, compare evaluation results with other tools, import/export results (for instance, when tool A does not test a given problem, but tool B does), filter results, etc. Support for a standardized language facilitates those tasks.
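As a hedged illustration of what machine-readable EARL output can look like, the sketch below builds a single assertion with the Python `rdflib` library and serializes it as Turtle. The subject and result URIs are hypothetical; the vocabulary terms come from the EARL 1.0 Schema.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

EARL = Namespace("http://www.w3.org/ns/earl#")

# Express one test result as an EARL assertion and serialize it as Turtle.
g = Graph()
g.bind("earl", EARL)
assertion = URIRef("https://example.org/results#a1")
g.add((assertion, RDF.type, EARL.Assertion))
g.add((assertion, EARL.subject, URIRef("https://example.org/page")))
g.add((assertion, EARL.test, URIRef("https://www.w3.org/TR/WCAG20/#text-equiv")))
g.add((assertion, EARL.mode, EARL.automatic))

result = URIRef("https://example.org/results#r1")
g.add((result, RDF.type, EARL.TestResult))
g.add((result, EARL.outcome, EARL.failed))
g.add((assertion, EARL.result, result))
print(g.serialize(format="turtle"))
```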
Although they may not be considered standardized, some tools support exporting test results in other common formats like Comma-Separated Values (CSV) [CSV, TABDATA].
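A CSV export can be as simple as the following sketch, which uses Python's standard `csv` module; the result records are invented for illustration.

```python
import csv

results = [
    {"url": "https://example.org/", "test": "img-alt", "outcome": "failed"},
    {"url": "https://example.org/", "test": "html-lang", "outcome": "passed"},
]

# Write the flat result records to a CSV file with a header row.
with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["url", "test", "outcome"])
    writer.writeheader()
    writer.writerows(results)
```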
The implementation of monitoring features requires that the tool has a persistence layer (a database, for example) where results can be stored and retrieved at a later stage to compare different evaluation rounds.
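As one possible approach, the sketch below stores results in an SQLite database, which is enough to compare failure counts across evaluation rounds. The schema and data are illustrative only.

```python
import sqlite3
from datetime import date

# Store each evaluation round so later rounds can be compared against it.
db = sqlite3.connect("monitoring.db")
db.execute("""CREATE TABLE IF NOT EXISTS results
              (round_date TEXT, url TEXT, test TEXT, outcome TEXT)""")
db.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
           (date.today().isoformat(), "https://example.org/",
            "img-alt", "failed"))
db.commit()

# Compare rounds: how many failures were recorded in each?
for row in db.execute("""SELECT round_date, COUNT(*) FROM results
                         WHERE outcome = 'failed' GROUP BY round_date"""):
    print(row)
```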
In many evaluation methodologies, accessibility experts and quality assurance engineers use different tools. If the evaluation tool supports import and export of test results (for instance, in EARL format [EARL10], as JSON [JSON], in a CSV [CSV, TABDATA] file, etc.), the tool may be easily integrated in such environments.
This feature allows the customization of the resulting report according to different criteria, such as the target audience, the type of results, the part of the site being analyzed, the type of content, etc. This feature may also allow the developer or the accessibility expert to add additional comments in the report.
The presentation of evaluation results is influenced by the underlying hierarchy of accessibility guidelines, success criteria, and techniques. Aggregation is also related to the structure of the page. For instance, accessibility errors may be listed for a whole web resource or presented for concrete components like images, videos, tables, forms, links, etc.
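The following minimal sketch illustrates aggregation by component type, assuming results have already been flattened into records; the data is invented.

```python
from collections import Counter

# Aggregate flat results by the component type they relate to.
results = [
    {"component": "image", "outcome": "failed"},
    {"component": "image", "outcome": "passed"},
    {"component": "form", "outcome": "failed"},
]
failures_by_component = Counter(
    r["component"] for r in results if r["outcome"] == "failed"
)
print(failures_by_component)   # Counter({'image': 1, 'form': 1})
```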
Many customers demand conformance statements in order to assess the status of their website quickly. When issuing such conformance statements, it is therefore necessary to take into account the different types of accessibility techniques (i.e., common failures, sufficient techniques, etc.) and to aggregate results as described in the previous section.
As described in Section 2.2.2, full accessibility compliance can only be achieved when manual testing has been implemented.
The majority of web developers have little or no knowledge about web accessibility. Together with their reporting capabilities, tools may provide additional information to support developers and accessibility experts in correcting the accessibility problems detected. This information may include examples, tutorials, screencasts, pointers to online resources, links to the W3C recommendations, etc. If the evaluation tool is part of an authoring tool as described in the Authoring Tool Accessibility Guidelines 2.0 [ATAG], then the tool will meet its success criterion B.3.2.1.
This feature may include, for example, a step-by-step wizard that guides the evaluator in correcting the problems found. Automatic repair of accessibility problems is discouraged, as it may introduce undesirable side effects.
This section includes characteristics that describe the integration into the development and editing workflow of the user, or that are targeted at the customization of different aspects of the tool depending on its audience, such as user interface language, user interface functionality, user interface accessibility, etc.
Accessibility evaluation tools present different interfaces, which allow their integration into the standard development workflow of the user. Typical examples include:

- plug-ins or extensions for web browsers;
- plug-ins for authoring tools, such as content management systems (CMS) and integrated development environments (IDE);
- standalone applications, either desktop-based or web-based;
- APIs and command-line interfaces that allow integration into automated build and testing processes.
Localization and internationalization are important for addressing worldwide markets. Tool users may not speak English, and it may be necessary to present the user interface (e.g., icons, text directionality, UI layout, units, etc.) and the reports in other languages and adapted to other cultures. As pointed out earlier, more information about this topic can be found in the W3C Internationalization Activity [W3Ci18n] and in [I18N].
From the accessibility standpoint, it is recommended to use the authorized translations of the Web Content Accessibility Guidelines. It must also be considered that some accessibility tests need to be customized for other languages, such as those related to readability.
Typically, evaluation tools are targeted at web accessibility experts with a deep knowledge of the topic. However, there are also tools that allow the customization of the evaluation results or even the user interface functionality to other audiences, for instance:

- web developers and quality assurance engineers;
- content authors and editors without deep technical knowledge;
- website owners and commissioners who need a quick overview of results.
The availability of such characteristics must be declared explicitly and presented in a way that is adequate for these target user groups.
Although there is an international effort to harmonize web accessibility standards, there are still minor differences in accessibility requirements between countries. The tool should specify in its documentation which policy environments are supported. Most tools focus on implementing the Web Content Accessibility Guidelines 2.0 [WCAG20], as it is the accessibility standard most commonly referenced in policies worldwide.
Accessibility evaluation teams may include people with disabilities. It is therefore important that the tool itself can be used with different assistive technologies and that it integrates with the accessibility APIs of the underlying operating system. In such cases, compliance with the Authoring Tool Accessibility Guidelines 2.0 [ATAG] becomes an important feature to support, both from the perspective of the user interface of the tool and the access to its results.
Additionally, when producing reports (for instance, in HTML format), it is important that the reports themselves are accessible and comply with the Web Content Accessibility Guidelines 2.0 [WCAG20].
This section presents three examples of accessibility evaluation tools. They are provided for illustration purposes and do not represent existing products. Each subsection highlights some of the key features of the tool; the table at the end of the section summarizes and complements these textual descriptions.
Tool A is a browser plug-in, which can perform a quick automatic accessibility evaluation on a rendered HTML page. The main features of the tool are:
Table 1 presents an overview of the matching features as described in section 2.
Tool B is a large-scale accessibility evaluation tool used to analyze web sites with large volumes of content. The main features of the tool are:
Table 1 presents an overview of the matching features as described in section 2.
Tool C is an accessibility evaluation tool for web-based mobile applications. The tool does not support native applications, but it provides a simulation environment based upon a virtual machine environment that emulates the accessibility API of some devices. The main features of the tool are:
Table 1 presents an overview of the matching features as described in section 2.
This section presents a tabular comparison of the tool features described previously. The tools are provided for illustration purposes and do not represent existing products.
Category | Feature | Tool A | Tool B | Tool C |
---|---|---|---|---|
Test subject and its environment | Content-types | HTML, CSS and JavaScript | HTML, CSS and JavaScript | HTML, CSS and JavaScript |
 | Content encoding and content language | ISO-8859-1, UTF-8, UTF-16; any language supported by these encodings | ISO-8859-1, UTF-8; any language supported by these encodings | ISO-8859-1, UTF-8; any language supported by these encodings |
 | DOM Document fragments | no | no | no |
 | Dynamic content | relies on browser capabilities | yes | relies on browser capabilities |
 | Content negotiation | relies on browser capabilities; not configurable | yes | relies on browser capabilities; not configurable |
 | Cookies | relies on browser capabilities; not configurable | configurable | relies on browser capabilities; not configurable |
 | Authentication | relies on browser capabilities; not configurable | configurable | relies on browser capabilities; not configurable |
 | Session tracking | relies on browser capabilities; not configurable | configurable | relies on browser capabilities; not configurable |
 | Crawling | no | yes | no |
Testing functionality | Selection of evaluation tests | no | yes | no |
 | Test modes: automatic, semiautomatic and manual | only automatic | all | all |
 | Development of own tests and test extensions | no | no | no |
 | Test automation | no | no | yes |
Reporting and monitoring | Standard reporting languages | EARL | EARL | none |
 | Persistence of results | no | yes | no |
 | Import/export functionality | EARL | EARL, CSV | no |
 | Report customization | no | comments/results added by evaluator | no |
 | Results aggregation | no | yes | no |
 | Conformance | no | yes | no |
 | Error repair | inline hints | in report | yes |
Tool usage | Workflow integration | in browser | standalone | standalone |
 | Localization and internationalization | en | en, de, fr, es, jp | en |
 | Functionality customization to different audiences | developers | developers, commissioners | developers |
 | Policy environments | no | Section 508 (USA), BITV (Germany) | no |
 | Tool accessibility | not accessible | accessible in MS Windows (MSAA) | not accessible |
The editors would like to thank the contributions from the Evaluation and Repair Tools Working Group (ERT WG), and especially from Yod Samuel Martín, Philip Ackermann, Evangelos Vlachogiannis, Christophe Strobbe, Emmanuelle Gutiérrez y Restrepo and Konstantinos Votis.
This publication was developed with support from the WAI-ACT project, co-funded by the European Commission IST Programme.