Wendy Chisholm,
Trace R & D Center,
University of Wisconsin -- Madison
Gregg Vanderheiden,
Trace R & D Center,
University of Wisconsin -- Madison
Ian Jacobs, W3C
These guidelines explain how to make Web content
accessible to people with
disabilities. The guidelines are intended for all Web content
developers (page authors and site designers) and for
developers of authoring tools. The primary goal of
these guidelines is to promote accessibility. However, following them
will also make Web content more available to all users, whatever
user agent
they are using (e.g., desktop
browser, voice browser, mobile phone, automobile-based
personal computer, etc.) or
constraints they may be operating under (e.g., noisy
surroundings, under- or over-illuminated rooms, in a hands-free
environment, etc.). Following these guidelines will also help people
find information on the Web more quickly. These guidelines do not
discourage content developers from using images, video, etc., but
rather explain how to make multimedia content more accessible to a wide
audience.
This is a reference document for accessibility principles and
design ideas. Some of the strategies discussed in this document
address certain Web internationalization and mobile access
concerns. However, this document focuses on accessibility and does not
fully address the related concerns of other
W3C Activities. Please
consult the W3C Mobile Access
Activity home page and the
W3C Internationalization
Activity home page for more information.
This document is meant to be stable and therefore does not provide
specific information about browser support for different technologies
as that information changes rapidly. Instead, the Web Accessibility Initiative
(WAI) Web
site provides such information (refer to [WAI-UA-SUPPORT]).
This document includes an appendix that organizes all of the checkpoints by topic and priority.
The checkpoints in the appendix link to their definitions in the
current document. The topics identified in the appendix include
images, multimedia, tables, frames, forms, and scripts. The appendix
is available either as a
tabular summary of checkpoints or as a simple list of checkpoints.
A separate document, entitled "Techniques for Web Content Accessibility Guidelines 1.0" ([TECHNIQUES]), explains how to implement
the checkpoints defined in the current document. The Techniques
Document discusses each checkpoint in more detail and provides
examples using the
Hypertext Markup Language (HTML),
Cascading Style Sheets (CSS),
Synchronized Multimedia Integration Language
(SMIL),
and the Mathematical Markup Language (MathML). The Techniques Document also includes
techniques for document validation and testing, and
an index of HTML elements and attributes (and which techniques
use them). The Techniques Document has been designed to
track changes in technology and is expected to be updated
more frequently than the current document.
Note. Not all browsers or multimedia tools may
support the features described in the guidelines. In particular, new
features of HTML 4.0 or CSS 1 or CSS 2 may not be supported.
"Web Content Accessibility Guidelines 1.0" is part of a series of accessibility guidelines
published by the Web Accessibility
Initiative. The series also includes User Agent Accessibility
Guidelines ([WAI-USERAGENT]) and
Authoring Tool Accessibility Guidelines ([WAI-AUTOOLS]).
This specification is a Superseded Recommendation. A newer specification exists that is recommended for new adoption in place of this specification.
This document remains available as a reference for old -- and
possibly still deployed -- implementations, but is not
recommended for future implementation. Implementers are strongly
encouraged to use the latest version of WCAG, which is
available from the Web Content
Accessibility Guidelines
(WCAG) Overview.
For those unfamiliar with accessibility issues pertaining to
Web page design, consider that many users may be operating
in contexts very different from your own:
They may not be able to see, hear, move, or
may not be able to process some types of
information easily or at all.
They may have difficulty reading or comprehending text.
They may not have or be able to use a keyboard or mouse.
They may have a text-only screen, a small screen, or
a slow Internet connection.
They may not speak or understand fluently
the language in which the document is written.
They may be in a situation where their eyes, ears, or
hands are busy or interfered with (e.g., driving to work,
working in a loud environment, etc.).
They may have an early version of a browser, a different
browser entirely, a voice browser, or a
different operating system.
Content developers must consider these
different situations during page design. While there are several
situations to consider, each accessible design choice generally
benefits several disability groups at once and the Web community as a
whole. For example, by using style
sheets to control font styles and eliminating the FONT element,
HTML authors will have more control over their pages, make those
pages more accessible to people with low vision, and by sharing the
style sheets, will often shorten page download times for all
users.
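For instance, a presentational fragment such as

   <FONT size="4" color="navy">Sailing Basics</FONT>

can instead be written with structural markup plus a rule in a shared style sheet (the class name, heading text, and values here are illustrative only, not taken from the guidelines):

   <H2 class="section-title">Sailing Basics</H2>

   /* in the linked style sheet */
   H2.section-title { font-size: 120%; color: navy; }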
The guidelines discuss accessibility issues and provide
accessible design solutions. They address typical scenarios
(similar to the font style example) that may pose problems for users
with certain disabilities. For example, the first guideline explains how content
developers can make images accessible. Some users may not be able to see
images, others may use text-based browsers that do not support
images, while others may have turned off support for images (e.g.,
due to a slow Internet connection). The guidelines do not suggest
avoiding images as a way to improve accessibility. Instead,
they explain that providing a text
equivalent of the image will make it accessible.
How does a text equivalent make the image accessible? Both words
in "text equivalent" are important:
Text content can be presented to the user as synthesized speech,
braille, and visually-displayed text. Each of these three mechanisms
uses a different sense -- ears for synthesized speech,
tactile for braille, and eyes for visually-displayed text -- making
the information accessible to groups representing a variety of sensory
and other disabilities.
In order to be useful, the text must convey the same function or
purpose as the image. For example, consider a text equivalent for a
photographic image of the Earth as seen from outer space. If the
purpose of the image is mostly that of decoration, then the text
"Photograph of the Earth as seen from outer space" might fulfill the
necessary function. If the purpose of the photograph is to illustrate
specific information about world geography, then the text equivalent
should convey that information. If the photograph has been designed
to tell the user to select the image (e.g., by clicking on it)
for information about the earth, equivalent text would be
"Information about the Earth". Thus, if the text conveys the same
function or purpose for the user with a disability as the image does
for other users, then it can be considered a text equivalent.
Note that, in addition to benefitting users with disabilities, text
equivalents can help all users find pages more quickly, since
search robots can use the text when indexing the pages.
While Web content developers must provide text equivalents for
images and other multimedia content, it is the responsibility of user agents
(e.g., browsers and assistive technologies such as screen
readers, braille displays, etc.) to present
the information to the user.
Non-text
equivalents of text (e.g., icons, pre-recorded speech, or a
video of a person translating the text into sign language) can make
documents accessible to people who may have difficulty accessing
written text, including many individuals with cognitive disabilities,
learning disabilities, and deafness. Non-text equivalents of text can
also be helpful to non-readers. An auditory
description is an example of a non-text equivalent of
visual information. An auditory description of a
multimedia presentation's visual track benefits people who cannot
see the visual information.
By following these guidelines, content developers can create pages
that transform gracefully. Pages that transform gracefully remain
accessible despite any of the constraints described in the introduction, including physical, sensory,
and cognitive disabilities, work constraints, and technological
barriers. Here are some keys to designing pages that transform
gracefully:
Provide text (including text equivalents). Text
can be rendered in ways that are available to almost all browsing
devices and accessible to almost all users.
Create documents that work even if the user
cannot see and/or hear. Provide information
that serves the same purpose or function as
audio or video in ways
suited to alternate sensory channels as well.
This does not mean creating a prerecorded
audio version of an entire site to make it accessible to
users who are blind. Users who are blind can use
screen reader technology to
render all text information in a page.
Create documents that do not rely on one type of hardware.
Pages should be usable by people
without mice, with small screens, low resolution screens,
black and white screens, no screens, with only voice or text
output, etc.
The theme of graceful transformation is addressed primarily by
guidelines 1 to 11.
Content developers should make content understandable and
navigable. This includes not only making the language clear and
simple, but also providing understandable mechanisms for navigating
within and between pages. Providing navigation tools and orientation
information in pages will maximize accessibility and usability.
Not all users can make use of visual clues such as image
maps, proportional scroll bars, side-by-side frames, or graphics that
guide sighted users of graphical desktop browsers. Users also lose
contextual information when they can only view a portion of a page,
either because they are accessing the page one word at a time (speech
synthesis or
braille display), or one section at a time (small display,
or a magnified display). Without orientation information,
users may not be able to understand very large tables, lists, menus, etc.
The theme of making content understandable and navigable is
addressed primarily in guidelines 12 to
14.
This document includes fourteen guidelines, or general principles of
accessible design. Each guideline includes:
The guideline number.
The statement of the guideline.
Guideline navigation links. Three links allow
navigation to the next guideline (right arrow
icon), the previous guideline (left arrow icon),
or the current guideline's position in the table
of contents (up arrow icon).
The rationale behind the guideline and some
groups of users who benefit from it.
A list of checkpoint definitions.
The checkpoint definitions in
each guideline explain how the guideline applies in typical content
development scenarios. Each checkpoint definition includes:
The checkpoint number.
The statement of the checkpoint.
The priority of the checkpoint. Priority 1
checkpoints are highlighted through the use of style sheets.
Optional informative notes, clarifying examples,
and cross references to related guidelines or checkpoints.
A link to a section of the
Techniques Document ([TECHNIQUES]) where
implementations and examples of the checkpoint are discussed.
Each checkpoint is intended to be
specific enough so that someone reviewing a page
or site may verify that the checkpoint has been satisfied.
A Web content developer must satisfy this
checkpoint. Otherwise, one or more groups will find it impossible to
access information in the document. Satisfying this checkpoint is a
basic requirement for some groups to be able to use Web documents.
A Web content developer should satisfy this
checkpoint. Otherwise, one or more groups will find it difficult to
access information in the document. Satisfying this checkpoint will
remove significant barriers to accessing Web documents.
A Web content developer may
address this checkpoint. Otherwise,
one or more groups will find it somewhat difficult to access
information in the document. Satisfying this checkpoint will improve
access to Web documents.
Some checkpoints specify a priority level that may change under
certain (indicated) conditions.
This section defines three levels of conformance to this
document:
Conformance Level "A":
all Priority 1 checkpoints are satisfied;
Conformance Level "Double-A":
all Priority 1 and 2 checkpoints are satisfied;
Conformance Level "Triple-A":
all Priority 1, 2, and 3 checkpoints are satisfied.
Note. Conformance levels are spelled out in text
so they may be understood when rendered to speech.
Claims of conformance to this document must use one of the
following two forms.
Form 1: Specify:
The guidelines title: "Web Content Accessibility Guidelines 1.0"
The guidelines URI: http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505
The conformance level satisfied: "A", "Double-A", or "Triple-A".
The scope covered by the claim (e.g., page, site, or
defined portion of a site).
Example of Form 1:
This page conforms to W3C's "Web Content Accessibility Guidelines 1.0",
available at http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505, level Double-A.
Form 2: Include, on each page claiming conformance, one of three
icons provided by W3C and link the icon to the
appropriate W3C explanation of the claim. Information about
the icons and how to insert them in pages is available at
[WCAG-ICONS].
Although some people cannot use images, movies, sounds, applets,
etc. directly, they may still use pages that include equivalent information to the visual
or auditory content. The equivalent information must serve the same
purpose as the visual or auditory content. Thus, a text equivalent for
an image of an upward arrow that links to a table of contents could be
"Go to table of contents". In some cases, an equivalent should also
describe the appearance of visual content (e.g., for complex charts,
billboards, or diagrams) or the sound of auditory content (e.g., for
audio samples used in education).
This guideline emphasizes the importance of providing text equivalents of non-text
content (images, pre-recorded audio, video). The power of text equivalents
lies in their capacity to be rendered in ways that are accessible to
people from various disability groups using a variety of technologies.
Text can be readily output to speech synthesizers and braille
displays, and can be presented visually (in
a variety of sizes) on computer displays
and paper. Synthesized speech is critical for individuals who are
blind and for many people with the reading difficulties that often
accompany cognitive disabilities, learning disabilities, and
deafness. Braille is essential for individuals who are both deaf and
blind, as well as many individuals whose only sensory disability is
blindness. Text displayed visually benefits
users who are deaf as well as the majority of Web users.
Providing non-text equivalents (e.g., pictures, videos, and
pre-recorded audio) of text is also beneficial to some users,
especially nonreaders or people who have difficulty reading. In
movies or visual presentations, visual action such as body language or
other visual cues may not be accompanied by enough audio information
to convey the same information. Unless verbal descriptions of this
visual information are provided, people who cannot see (or look at)
the visual content will not be able to perceive it.
1.1 Provide a text equivalent for every non-text element (e.g., via "alt", "longdesc", or in element content). This includes: images, graphical representations of text (including symbols), image map regions, animations (e.g., animated GIFs), applets and programmatic objects, ascii art, frames, scripts, images used as list bullets, spacers, graphical buttons, sounds (played with or without user interaction), stand-alone audio files, audio tracks of video, and video. [Priority 1]
For example, in HTML:
Use "alt" for the IMG, INPUT, and APPLET elements,
or provide a text equivalent
in the content of the OBJECT and APPLET elements.
For complex content (e.g., a chart) where
the "alt" text does not provide a complete text
equivalent, provide an additional
description using, for example, "longdesc" with IMG or FRAME,
a link inside an OBJECT element,
or a description link.
For image maps, either use the "alt" attribute with AREA, or
use the MAP element with A elements (and other text) as content.
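A brief HTML sketch of these techniques follows; the file names, descriptions, and link targets are invented for illustration:

   <IMG src="logo.gif" alt="Acme Corporation home page">
   <IMG src="sales.gif" alt="Chart of 1998 sales"
        longdesc="sales-description.html">
   <OBJECT classid="java:Clock.class" width="100" height="100">
     An analog clock showing the current time.
   </OBJECT>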
1.3 Until user agents can automatically read aloud the text equivalent of a visual track, provide an auditory description of the important information of the visual track of a multimedia presentation. [Priority 1]
1.4 For any time-based multimedia presentation (e.g., a movie or animation), synchronize equivalent alternatives (e.g., captions or auditory descriptions of the visual track) with the presentation. [Priority 1]
1.5 Until user agents render text equivalents for client-side image map links, provide redundant text links for each active region of a client-side image map. [Priority 3]
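For example, a client-side image map with redundant text links might be marked up as follows (URIs, coordinates, and link text are illustrative):

   <IMG src="navbar.gif" alt="Navigation bar" usemap="#sitemap">
   <MAP name="sitemap">
     <AREA shape="rect" coords="0,0,118,28" href="guide.html" alt="Access Guide">
     <AREA shape="rect" coords="119,0,184,28" href="search.html" alt="Search">
   </MAP>
   <P>[<A href="guide.html">Access Guide</A>] [<A href="search.html">Search</A>]</P>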
If color alone is used to convey information, people who cannot
differentiate between certain colors and users with devices that have
non-color or non-visual displays will not receive the information.
When foreground and background colors are too close to the same
hue, they may not provide sufficient contrast when viewed using
monochrome displays or by people with different types of color
deficits.
2.2 Ensure that foreground and background color combinations provide sufficient contrast when viewed by someone having color deficits or when viewed on a black and white screen. [Priority 2 for images, Priority 3 for text].
Using markup improperly -- not according to specification --
hinders accessibility. Misusing markup for a presentation effect
(e.g., using a table for layout or a header to change the font size)
makes it difficult for users with specialized
software to understand the organization of the page or to
navigate through it. Furthermore, using presentation
markup rather than structural markup to
convey structure (e.g., constructing what looks like a table of data
with an HTML PRE element) makes it difficult to render a page
intelligibly to other devices (refer to the description of
difference between content,
structure, and presentation).
Content developers
may be tempted to use (or misuse) constructs that achieve a
desired formatting effect on older browsers. They must be aware that
these practices cause accessibility problems and must consider
whether the formatting effect is so critical as to warrant
making the document inaccessible to some users.
At the other extreme, content developers must not sacrifice
appropriate markup because a certain browser or assistive technology
does not process it correctly. For example, it is appropriate to use
the TABLE element in HTML to mark up
tabular information even
though some older screen readers may not handle side-by-side
text correctly (refer to checkpoint 10.3). Using
TABLE correctly and creating tables that transform gracefully (refer to
guideline 5) makes it possible for software to render tables other
than as two-dimensional grids.
3.1 When an appropriate markup language exists, use markup rather than images to convey information. [Priority 2]
For example, use MathML to mark up mathematical
equations, and
style sheets to format text and control layout. Also,
avoid using images to represent text -- use text and style sheets
instead.
Refer also to guideline 6 and guideline 11.
3.4 Use relative rather than absolute units in markup language attribute values and style sheet property values. [Priority 2]
For example, in CSS, use 'em' or percentage
lengths rather than 'pt' or 'cm', which are absolute
units. If absolute units are used,
validate that the rendered
content is usable (refer to the
section on validation).
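A small style sheet sketch of the distinction (selectors and values are illustrative):

   /* Relative units scale with the user's preferred font size */
   P      { font-size: 1em; margin-left: 5%; }
   STRONG { font-size: 120%; }

   /* Absolute units such as 'pt' and 'cm' do not adapt:
      P { font-size: 12pt; margin-left: 2cm; } */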
When content developers mark up natural language changes in a
document, speech synthesizers and braille devices can automatically
switch to the new language, making the document more accessible
to multilingual users.
Content developers should identify the predominant natural language of a document's
content (through markup or
HTTP headers). Content developers should
also provide expansions of abbreviations and acronyms.
In addition to helping assistive technologies, natural language
markup allows search engines to find key words and identify documents
in a desired language. Natural language markup also improves
readability of the Web for all people, including those with learning
disabilities, cognitive disabilities, or people who are deaf.
When abbreviations and natural language changes are not identified,
they may be indecipherable when machine-spoken or brailled.
4.2 Specify the expansion of each abbreviation or acronym in a document where it first occurs. [Priority 3]
For example, in HTML,
use the "title" attribute of the ABBR and ACRONYM elements.
Providing the expansion in the main body of the document
also helps document usability.
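For instance (the sentence itself is invented for illustration):

   <P>The <ABBR title="World Wide Web Consortium">W3C</ABBR> hosts the
   <ACRONYM title="Web Accessibility Initiative">WAI</ACRONYM>.</P>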
4.3 Identify the primary natural language of a document. [Priority 3]
For example, in HTML set the "lang" attribute on the
HTML element. In XML, use "xml:lang".
Server operators should configure servers to
take advantage of HTTP content negotiation mechanisms
([RFC2068], section 14.13) so
that clients can automatically retrieve
documents of the preferred language.
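A minimal sketch of both mechanisms (the document and header shown are illustrative):

   <HTML lang="fr">
     <HEAD><TITLE>Bienvenue</TITLE></HEAD>
     <BODY>...</BODY>
   </HTML>

The server may convey the same information in an HTTP header, for example "Content-Language: fr".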
Tables should be used to mark up truly
tabular information ("data
tables"). Content developers should avoid using them to lay out pages
("layout tables"). Tables for any use also present special problems
to users of screen
readers (refer to checkpoint 10.3).
Some user agents allow users to navigate
among table cells and access header and other table cell
information. Unless marked up properly, these tables
will not provide user agents with the appropriate information.
(Refer also to guideline 3.)
The following checkpoints will directly benefit people who access a
table through auditory means (e.g., a screen reader or an
automobile-based personal computer) or who view only a
portion of the page at a time (e.g., users with blindness or low
vision using speech output or a braille display, or other users of
devices with small displays, etc.).
5.2 For data tables that have two or more logical levels of row or column headers, use markup to associate data cells and header cells. [Priority 1]
For example, in HTML, use THEAD, TFOOT, and TBODY to group
rows, COL and COLGROUP to group columns, and
the "axis", "scope", and "headers" attributes, to describe
more complex relationships among data.
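The following sketch shows one way the "scope" and "headers" attributes can associate data cells with header cells (the table content is invented):

   <TABLE summary="Cups of coffee consumed by each member">
     <TR>
       <TH id="name" scope="col">Name</TH>
       <TH id="cups" scope="col">Cups</TH>
     </TR>
     <TR>
       <TD headers="name">T. Sexton</TD>
       <TD headers="cups">10</TD>
     </TR>
   </TABLE>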
5.3 Do not use tables for layout unless the table makes sense when linearized. Otherwise, if the table does not make sense, provide an alternative equivalent (which may be a linearized version). [Priority 2]
Although content developers
are encouraged to use new technologies that
solve problems raised by existing technologies, they should know how
to make their pages still work with older browsers
and people who choose to turn off features.
6.1 Organize documents so they may be read without style sheets. For example, when an HTML document is rendered without associated style sheets, it must still be possible to read the document. [Priority 1]
When content is organized logically, it will be
rendered in a meaningful order when style sheets
are turned off or not supported.
6.3 Ensure that pages are usable when scripts, applets, or other programmatic objects are turned off or not supported. If this is not possible, provide equivalent information on an alternative accessible page. [Priority 1]
For example, ensure that links that trigger scripts work
when scripts are turned off or not supported (e.g.,
do not use "javascript:" as the link target).
If it is not possible to make the
page usable without scripts, provide a text equivalent with the
NOSCRIPT element, or use a server-side script instead of a client-side
script, or provide an alternative accessible page as per
checkpoint 11.4. Refer also to guideline 1.
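A short sketch of the NOSCRIPT technique (the script and URI are illustrative):

   <SCRIPT type="text/javascript">
     document.write("Today's special: fresh tomato soup");
   </SCRIPT>
   <NOSCRIPT>
     <P><A href="specials.html">Read today's specials</A></P>
   </NOSCRIPT>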
6.5 Ensure that dynamic content is accessible or provide an alternative presentation or page. [Priority 2]
For example, in HTML, use
NOFRAMES at the end of each frameset. For some applications,
server-side scripts may be more accessible than
client-side scripts.
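For instance (frame sources and titles are invented for illustration):

   <FRAMESET cols="20%,80%">
     <FRAME src="menu.html" title="Navigation menu">
     <FRAME src="content.html" title="Main content">
     <NOFRAMES>
       <P><A href="content.html">No-frames version of this site</A></P>
     </NOFRAMES>
   </FRAMESET>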
Some people with cognitive or
visual disabilities are unable to read moving text
quickly enough or at all. Movement can also cause such a distraction
that the rest of the page becomes unreadable for people with
cognitive disabilities. Screen
readers are unable to read moving text. People with
physical disabilities might not be able to move
quickly or accurately enough to interact with moving objects.
Note. All of the following checkpoints involve
some content developer responsibility until user
agents provide adequate feature control mechanisms.
7.1 Until user agents allow users to control flickering, avoid causing the screen to flicker. [Priority 1]
Note. People
with photosensitive epilepsy can have seizures triggered by
flickering or flashing in the 4 to 59 flashes per second (Hertz) range
with a peak sensitivity at 20 flashes per second as well as quick
changes from dark to light (like strobe lights).
7.2 Until user agents allow users to control blinking, avoid causing content to blink (i.e., change presentation at a regular rate, such as turning on and off). [Priority 2]
7.3 Until user agents allow users to freeze moving content, avoid movement in pages. [Priority 2]
When a page includes moving content,
provide a mechanism within a script
or applet to allow users to freeze motion or updates.
Using style sheets with scripting to create
movement allows users to turn off
or override the effect more easily.
Refer also to guideline 8.
7.5 Until user agents provide the ability to stop auto-redirect, do not use markup to redirect pages automatically. Instead, configure the server to perform redirects. [Priority 2]
When an embedded object has its "own interface", the interface --
like the interface to the browser itself -- must be accessible. If the
interface of the embedded object cannot be made accessible, an
alternative accessible solution must be provided.
Note. For information about accessible interfaces,
please consult the User Agent Accessibility Guidelines ([WAI-USERAGENT]) and the Authoring Tool
Accessibility Guidelines ([WAI-AUTOOLS]).
8.1 Make programmatic elements such as scripts and applets directly accessible or compatible with assistive technologies [Priority 1 if functionality is important and not presented elsewhere, otherwise Priority 2.]
Device-independent access
means that the user may interact with the user agent or document
with a preferred input (or output) device -- mouse, keyboard, voice,
head wand, or other. If, for example, a form
control can only be activated with a mouse or other pointing device,
someone who is using the page without sight, with voice input, or with
a keyboard or who is using some other non-pointing
input device will not be able to use the form.
Note. Providing text equivalents for image maps or
images used as links makes it possible for users to interact with them
without a pointing device. Refer also to guideline 1.
Generally, pages that allow keyboard interaction are
also accessible through speech input or a command line interface.
9.1 Provide client-side image maps instead of server-side image maps except where the regions cannot be defined with an available geometric shape. [Priority 1]
9.5 Provide keyboard shortcuts to important links (including those in client-side image maps), form controls, and groups of form controls. [Priority 3]
For example, in HTML, specify shortcuts
via the "accesskey" attribute.
For example, older browsers do not allow users to
navigate to empty edit boxes. Older screen readers read lists of
consecutive links as one link. These active elements are therefore
difficult or impossible to access. Also, changing the current window
or popping up new windows can be very disorienting to users
who cannot see that this has happened.
Note. The following checkpoints apply until user agents
(including assistive
technologies) address these issues. These checkpoints are
classified as "interim", meaning that the Web Content Guidelines
Working Group considers them to be valid and necessary to Web
accessibility as of the publication of this
document. However, the Working Group does not expect these
checkpoints to be necessary in the future, once Web technologies have
incorporated anticipated features or capabilities.
10.1 Until user agents allow users to turn off spawned windows, do not cause pop-ups or other windows to appear and do not change the current window without informing the user. [Priority 2]
For example, in HTML, avoid using a frame whose target
is a new window.
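Another common case to avoid is a link whose "target" opens a new window without warning (the link below is illustrative):

   <!-- Avoid: opens a new, unannounced window -->
   <A href="help.html" target="_blank">Help</A>

   <!-- Prefer: stays in the current window -->
   <A href="help.html">Help</A>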
10.2 Until user agents support explicit associations between labels and form controls, for all form controls with implicitly associated labels, ensure that the label is properly positioned. [Priority 2]
The label must immediately precede
its control on the same line (allowing more than one
control/label per line) or be in the line preceding the
control (with only one label and one control per line).
Refer also to checkpoint 12.4.
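A small sketch (the field name is invented): the label text immediately precedes its control on the same line, and an explicit LABEL is used as well, per checkpoint 12.4:

   <P><LABEL for="fname">First name:</LABEL> <INPUT type="text" id="fname" name="fname"></P>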
10.3 Until user agents (including assistive technologies) render side-by-side text correctly, provide a linear text alternative (on the current page or some other) for all tables that lay out text in parallel, word-wrapped columns. [Priority 3]
Note.
Please consult the definition of linearized table.
This checkpoint benefits people with
user agents
(such as some screen readers)
that are unable to handle blocks of text presented side-by-side;
the checkpoint should not discourage
content developers from using tables
to represent tabular information.
The current guidelines recommend W3C technologies (e.g., HTML,
CSS, etc.) for several reasons:
W3C technologies include "built-in" accessibility features.
W3C specifications undergo early review to ensure that accessibility
issues are considered during the design phase.
W3C specifications are developed in an open, industry consensus process.
Many non-W3C formats (e.g., PDF, Shockwave, etc.) require viewing
with either plug-ins or stand-alone applications. Often, these
formats cannot be viewed or navigated with standard
user agents (including assistive
technologies). Avoiding non-W3C and non-standard features
(proprietary elements, attributes, properties, and extensions) will
tend to make pages more accessible to more people using a wider
variety of hardware and software. When inaccessible
technologies (proprietary or not)
must be used, equivalent accessible pages must be provided.
Even when W3C technologies are used, they must be used in
accordance with accessibility guidelines. When using new technologies,
ensure that they transform gracefully (refer also to guideline 6).
Note. Converting documents (from PDF, PostScript,
RTF, etc.) to W3C markup languages (HTML,
XML) does not always create
an accessible document. Therefore, validate each page for
accessibility and usability
after the conversion process (refer to
the section on validation).
If a page does not readily
convert, either revise the page until its original
representation converts appropriately or provide an HTML or plain text
version.
11.1 Use W3C technologies when they are available and appropriate for a task and use the latest versions when supported. [Priority 2]
Refer to the list of references
for information about where to find the latest W3C specifications
and [WAI-UA-SUPPORT]
for information about user agent support for W3C technologies.
11.4 If, after best efforts, you cannot create an accessible page, provide a link to an alternative page that uses W3C technologies, is accessible, has equivalent information (or functionality), and is updated as often as the inaccessible (original) page. [Priority 1]
Note.
Content developers should only resort to alternative
pages when other solutions fail because alternative pages
are generally updated less often than "primary" pages. An out-of-date
page may be as frustrating as one that is inaccessible since, in both
cases, the information presented on the original page is
unavailable. Automatically generating alternative pages may lead to
more frequent updates, but content developers
must still be careful to ensure
that generated pages always make sense, and that users are able to
navigate a site by following links on primary pages, alternative
pages, or both. Before resorting to an alternative page,
reconsider the design of the original page; making it
accessible is likely to improve it for all users.
Grouping elements and providing contextual information about the
relationships between elements can be useful for all
users. Complex relationships between parts of a page may
be difficult for people with cognitive disabilities
and people with visual disabilities to interpret.
12.3 Divide large blocks of information into more manageable groups where natural and appropriate. [Priority 2]
For example, in HTML, use OPTGROUP
to group OPTION elements inside a SELECT;
group form controls with FIELDSET and LEGEND;
use nested lists where appropriate;
use headings to structure documents, etc.
Refer also to guideline 3.
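The following form fragment sketches these groupings (the form action, fields, and option text are invented):

   <FORM action="order.html" method="post">
     <FIELDSET>
       <LEGEND>Delivery address</LEGEND>
       <LABEL for="street">Street:</LABEL> <INPUT type="text" id="street" name="street">
     </FIELDSET>
     <SELECT name="drink">
       <OPTGROUP label="Hot drinks">
         <OPTION>Coffee</OPTION>
         <OPTION>Tea</OPTION>
       </OPTGROUP>
       <OPTGROUP label="Cold drinks">
         <OPTION>Juice</OPTION>
       </OPTGROUP>
     </SELECT>
   </FORM>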
13.1 Clearly identify the target of each link. [Priority 2]
Link
text should be meaningful enough
to make sense when read out of context --
either on its own or as part of a sequence of links.
Link text should also be terse.
For example, in HTML, write "Information about
version 4.3" instead of "click here".
In addition to clear link text, content developers may
further clarify the target of a link
with an informative link title
(e.g., in HTML, the "title" attribute).
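For instance (the URI and title text are illustrative):

   <!-- Unclear when read out of context -->
   <A href="release43.html">Click here</A> to read about version 4.3.

   <!-- Meaningful on its own, with an optional "title" -->
   <A href="release43.html"
      title="Release notes for version 4.3">Information about version 4.3</A>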
13.2 Provide metadata to add semantic information to pages and sites. [Priority 2]
For example, use
RDF
([RDF])
to indicate the document's author,
the type of content, etc.
Note. Some HTML user agents
can build navigation tools from document relations described by
the HTML LINK element and "rel" or "rev" attributes
(e.g., rel="next", rel="previous", rel="index", etc.).
Refer also to checkpoint 13.5.
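A sketch of such document relations in the HEAD of a page (the chapter files and titles are invented); the same technique also serves checkpoint 13.9 for document collections:

   <HEAD>
     <TITLE>Chapter 2: Hulls and Rigging</TITLE>
     <LINK rel="previous" href="chapter1.html">
     <LINK rel="next" href="chapter3.html">
     <LINK rel="index" href="index.html">
   </HEAD>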
13.8 Place distinguishing information at the beginning of headings, paragraphs, lists, etc. [Priority 3]
Note.
This is commonly referred to as "front-loading" and is
especially helpful for people accessing information with serial
devices such as speech synthesizers.
13.9 Provide information about document collections (i.e., documents comprising multiple pages). [Priority 3]
For example, in HTML
specify document collections with the LINK
element and the "rel" and "rev" attributes.
Another way to create a collection is by building
an archive (e.g., with zip, tar and gzip, stuffit, etc.)
of the multiple pages.
Note.
The performance improvement gained by offline
processing can make browsing much less
expensive for people with disabilities who may
be browsing slowly.
Consistent page layout, recognizable graphics, and easy to understand
language benefit all users. In particular, they help
people with cognitive disabilities or who
have difficulty reading. (However, ensure that images
have text equivalents for people who are blind, have
low vision, or for any user who cannot or has
chosen not to view graphics. Refer also to guideline 1.)
Using clear and simple language promotes effective
communication. Access to written information can be difficult for
people who have cognitive or learning disabilities. Using clear and
simple language also benefits people whose first language differs from
your own, including those people who communicate primarily in sign
language.
Validate accessibility with automatic tools and human
review. Automated methods are generally rapid and convenient but cannot
identify all accessibility issues. Human review can help ensure
clarity of language and ease of navigation.
Begin using validation methods at the earliest stages of
development. Accessibility issues identified early are easier to
correct and avoid.
Use an automated accessibility tool and browser validation tool.
Please note that software tools do not address all accessibility
issues, such as the meaningfulness of link text, the applicability of
a text equivalent, etc.
Validate syntax (e.g., HTML, XML, etc.).
Validate style sheets (e.g., CSS).
Use a text-only browser or emulator.
Use multiple graphic browsers, with:
sounds and graphics loaded,
graphics not loaded,
sounds not loaded,
no mouse,
frames, scripts, style sheets, and applets not loaded
Use several browsers, old and new.
Use a self-voicing
browser, a screen reader, magnification software, a small display,
etc.
Use spell and grammar checkers. A person reading a page with a
speech synthesizer may not be able to decipher the synthesizer's best
guess for a word with a spelling error. Eliminating grammar problems
increases comprehension.
Review the document for clarity and simplicity.
Readability statistics, such as those generated by some word
processors may be useful indicators of clarity and simplicity. Better
still, ask an experienced (human) editor to review written content for
clarity. Editors can also improve the usability of documents by
identifying potentially sensitive cultural issues
that might arise due to language or icon usage.
Invite people with disabilities to review documents. Expert
and novice users with disabilities will provide valuable feedback
about accessibility or usability problems and their severity.
Software or hardware that has been specifically designed to assist
people with disabilities in carrying out daily activities. Assistive
technology includes wheelchairs, reading machines, devices for
grasping, etc. In the area of Web Accessibility, common
software-based assistive technologies include screen readers, screen
magnifiers, speech synthesizers, and voice input software that operate
in conjunction with graphical desktop browsers (among other user
agents). Hardware assistive technologies include
alternative keyboards and pointing devices.
ASCII art refers to text characters and symbols
that are combined to create an image. For example, ";-)" is the
smiley emoticon. [Ascii figure omitted: a chart showing the relationship
between flash frequency and photoconvulsive response in patients with
eyes open and closed.]
HTML editors, document conversion tools, tools that
generate Web content from databases are all
authoring tools. Refer to the
"Authoring Tool Accessibility Guidelines" ([WAI-AUTOOLS]) for information about developing
accessible tools.
Braille uses six raised dots in different patterns to represent
letters and numbers, read with the fingertips by people who are blind.
[Image omitted: the word "Accessible" in braille.]
A braille display,
commonly referred to as a "dynamic braille display," raises or lowers
dot patterns on command from an electronic device, usually a
computer. The result is a line of braille that can change from moment
to moment. Current dynamic braille displays range in size from one cell (six
or eight dots) to an eighty-cell line, most having
between twelve and twenty cells per line.
A deprecated element or attribute is one that has been outdated by
newer constructs. Deprecated elements may become
obsolete in future versions of HTML. The
index of HTML elements and attributes in
the Techniques Document indicates which elements
and attributes are deprecated in HTML 4.0.
Authors should avoid using deprecated elements and attributes.
User agents should continue to support them for
reasons of backward compatibility.
Users must be able to interact with a user agent
(and the document it renders) using the supported input and output
devices of their choice and according to their needs.
Input devices may include pointing devices,
keyboards, braille devices, head wands, microphones, and
others. Output devices may include monitors, speech synthesizers, and
braille devices.
Please note that "device-independent support" does not mean that
user agents must support every input or output device. User agents
should offer redundant input and output mechanisms for those devices
that are supported. For example, if a user agent supports keyboard
and mouse input, users should be able to interact with all features
using either the keyboard or the mouse.
The content of a document refers to
what it says to the user through natural language,
images, sounds, movies, animations, etc.
The structure of a document is how it is
organized logically (e.g., by chapter, with an
introduction and table of contents, etc.).
An element
(e.g., P, STRONG, BLOCKQUOTE in HTML) that specifies document
structure is called a structural
element. The presentation of a
document is how the document is rendered (e.g., as print, as a
two-dimensional graphical presentation, as a text-only presentation,
as synthesized speech, as braille, etc.)
An element
that specifies document presentation (e.g., B, FONT, CENTER)
is called a presentation
element.
Consider a document header, for example.
The content of the header is what
the header says (e.g., "Sailboats"). In HTML,
the header is a structural element
marked up with, for example, an H2 element.
Finally, the presentation of the header might be
a bold block text in the margin, a centered line of text,
a title spoken with a certain voice style (like an aural
font), etc.
DHTML is the marketing term applied to
a mixture of standards including HTML,
style sheets, the
Document Object Model [DOM1]
and scripting.
However, there is no W3C specification that formally defines
DHTML. Most guidelines may be applicable
to applications using DHTML, however the following guidelines focus on
issues related to scripting and style sheets: guideline 1,
guideline 3, guideline 6,
guideline 7, and guideline 9.
This document uses the term "element" both
in the strict SGML sense (an element is a syntactic construct) and
more generally to mean a type
of content (such as video or sound) or a logical
construct (such as a header or list). The second sense
emphasizes that a guideline inspired by HTML
could easily apply to another markup language.
Note that some (SGML) elements have content that is rendered
(e.g., the P, LI, or TABLE elements in HTML), some are replaced
by external content (e.g., IMG), and some affect
processing (e.g., STYLE and SCRIPT cause information
to be processed by a style sheet or script engine).
An element that causes text characters
to be part of the document is called a
text element.
Content is "equivalent" to other content when both fulfill essentially
the same function or purpose upon presentation to the user.
In the context of
this document, the equivalent must fulfill essentially the same function
for the person with a disability (at least insofar as is feasible, given
the nature of the disability and the state of technology), as the primary
content does for the person without any disability.
For example, the text "The Full Moon" might convey the same information
as an image of a full moon when presented to users. Note that
equivalent information focuses on fulfilling
the same function. If the image is
part of a link and understanding the image is crucial
to guessing the link target, an
equivalent must also give users an idea of the link target.
Providing equivalent information for inaccessible
content is one of the primary ways
authors can make their documents accessible to people with
disabilities.
As part of fulfilling the same function of content,
an equivalent may involve a description of that content
(i.e., what the content looks like or sounds like).
For example, in order for users to understand
the information conveyed by a complex chart,
authors should describe the visual information in the chart.
Since text content can be presented to the user as synthesized
speech, braille, and visually-displayed text, these guidelines
require text
equivalents for graphic and audio information.
Text equivalents must be
written so that they convey all essential content.
Non-text
equivalents (e.g., an
auditory description of a visual presentation, a video of a person
telling a story using sign language as an
equivalent for a written story, etc.)
also improve accessibility for
people who cannot access visual information
or written text, including many individuals with blindness,
cognitive disabilities, learning disabilities, and deafness.
Equivalent information may be provided in a number of ways,
including through attributes (e.g., a text value for the "alt"
attribute in HTML and SMIL), as part of element content (e.g., the
OBJECT in HTML), as part of the document's prose, or via a linked
document (e.g., designated by the "longdesc" attribute in HTML or a description link). Depending on the complexity of
the equivalent, it may be necessary to combine techniques (e.g., use
"alt" for an abbreviated equivalent, useful to familiar readers,
in addition to "longdesc" for a link to more complete information,
useful to first-time readers).
The details of how and when to provide equivalent information
are part of the Techniques Document
([TECHNIQUES]).
A text transcript
is a text equivalent of audio information that includes spoken
words and non-spoken sounds such as sound effects.
A caption is a text
transcript for the audio track of a video presentation that is
synchronized with the video and audio tracks. Captions are generally
rendered visually by being superimposed over the video,
which benefits people who are deaf and
hard-of-hearing, and anyone who cannot hear the audio (e.g.,
when in a crowded room).
A collated
text transcript combines (collates) captions
with text descriptions of video information
(descriptions of the actions, body language, graphics, and scene
changes of the video track). These text equivalents make
presentations accessible to people who are deaf-blind and to people who
cannot play movies, animations, etc. It also makes the information
available to search engines.
One example of a non-text equivalent is an auditory description
of the key visual elements of a presentation.
The description is either a
prerecorded human voice or a synthesized voice (recorded or generated
on the fly). The auditory description
is synchronized with the audio track of the presentation, usually
during natural pauses in the audio track.
Auditory descriptions include information about actions,
body language, graphics, and scene changes.
Content developers
can make client-side image maps accessible by providing
device-independent access to the same links associated with the image
map's regions. Client-side image maps allow the user agent to provide
immediate feedback as to whether or not the user's pointer is over an
active region.
A table rendering process where the contents of
the cells become a series of paragraphs (e.g., down
the page) one after another. The paragraphs will occur in the same
order as the cells are defined in the document source.
Cells should make sense when read in order and should
include structural
elements (that create paragraphs, headers, lists, etc.)
so the page makes sense after linearization.
Spoken, written, or signed human languages such as French,
Japanese, American Sign Language, and braille.
The natural language of content may
be indicated with the "lang" attribute in HTML ([HTML40], section 8.1)
and the "xml:lang"
attribute in XML ([XML], section 2.12).
A PDA is a small,
portable computing device. Most PDAs are used to track personal data
such as calendars, contacts, and electronic mail. A PDA is generally a
handheld device with a small screen that allows input from various
sources.
A software program that magnifies a portion of the screen, so
that it can be more easily viewed. Screen magnifiers
are used primarily by individuals
with low vision.
A software program that reads the contents of the screen aloud
to a user. Screen readers are used primarily
by individuals who are blind. Screen
readers can usually only read text that is printed, not painted, to
the screen.
A style sheet is a set of statements that specify presentation of
a document. Style sheets may have three different origins: they
may be written by content providers, created by users, or
built into user agents. In CSS ([CSS2]),
the interaction of content provider, user, and user agent
style sheets is called the cascade.
Presentation markup
is markup that achieves a stylistic (rather than structuring) effect
such as the B or I elements in HTML. Note that the STRONG and EM
elements are not considered presentation markup since they convey
information that is independent of a particular font style.
When tables are used to represent logical relationships among
data -- text, numbers, images, etc., that information is
called "tabular information" and the tables are
called "data tables". The relationships expressed by a table
may be rendered visually (usually on a two-dimensional grid),
aurally (often preceding cells with header information),
or in other formats.
In most of the checkpoints, content developers are asked to
ensure the accessibility of their pages and sites. However, there are
accessibility needs that would be more appropriately met by
user
agents (including
assistive technologies).
As of the publication of
this document, not all user agents or
assistive technologies provide the accessibility control users require
(e.g., some user agents may not allow users to turn off blinking
content, or some screen readers may not handle tables well).
Checkpoints that contain the phrase "until user agents ..." require
content developers to provide additional support for accessibility
until most user agents readily available to their
audience include the necessary accessibility features.
Note. The W3C WAI Web site
(refer to [WAI-UA-SUPPORT])
provides information
about user agent support for accessibility features.
Content developers are encouraged to consult this page
regularly for updated information.
Software to access Web
content, including desktop graphical browsers, text browsers, voice
browsers, mobile phones, multimedia players, plug-ins, and some
software assistive technologies used in conjunction with browsers such as
screen readers, screen magnifiers, and voice recognition software.
We wish to thank the following people who have contributed their
time and valuable comments to shaping these guidelines:
Harvey Bingham, Kevin Carey, Chetz Colwell, Neal Ewers, Geoff
Freed, Al Gilman, Larry Goldberg, Jon Gunderson, Eric Hansen,
Phill Jenkins, Leonard
Kasday, George Kerscher, Marja-Riitta Koivunen, Josh Krieger, Scott
Luebking, William Loughborough, Murray Maloney, Charles
McCathieNevile, MegaZone (Livingston Enterprises), Masafumi Nakane,
Mark Novak, Charles Oppermann, Mike Paciello, David Pawson, Michael Pieper,
Greg Rosmaita, Liam Quinn, Dave Raggett, T.V. Raman, Robert
Savellis, Jutta Treviranus, Steve Tyler, Jaap van Lelieveld, and
Jason White
The original draft of this document is based on "The Unified Web
Site Accessibility Guidelines" ([UWSAG])
compiled by the Trace R & D Center at the University of Wisconsin.
That document includes a list of additional contributors.
"Document Object Model (DOM) Level 1 Specification",
V. Apparao, S. Byrne, M. Champion, S. Isaacs, I. Jacobs, A. Le Hors, G. Nicol,
J. Robie, R. Sutor, C. Wilson, and L. Wood, eds., 1 October 1998.
The DOM Level 1 Recommendation is:
http://www.w3.org/TR/1998/REC-DOM-Level-1-19981001.
The latest version of DOM Level 1 is available at:
http://www.w3.org/TR/REC-DOM-Level-1
"PNG (Portable Network Graphics) Specification", T. Boutell, ed.,
T. Lane, contributing ed., 1 October 1996.
The latest version of PNG 1.0 is:
http://www.w3.org/TR/REC-png.
"Techniques for Web Content Accessibility Guidelines 1.0", W. Chisholm, G. Vanderheiden, I. Jacobs, eds.
This document explains how to implement the checkpoints defined
in "Web Content Accessibility Guidelines 1.0". The latest draft of the techniques is
available at:
http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/
"Authoring Tool Accessibility Guidelines", J. Treviranus,
J. Richards, I. Jacobs, C. McCathieNevile, eds.
The latest Working Draft of these guidelines for designing
accessible authoring tools is available at:
http://www.w3.org/TR/WAI-AUTOOLS/
This page documents known support by user agents (including
assistive technologies) of some accessibility features listed
in this document. The page is available at:
http://www.w3.org/WAI/Resources/WAI-UA-Support
"User Agent Accessibility Guidelines", J. Gunderson
and I. Jacobs, eds.
The latest Working Draft of these guidelines for designing accessible
user agents is available at:
http://www.w3.org/TR/WAI-USERAGENT/
"The Unified Web Site Accessibility Guidelines", G.
Vanderheiden, W. Chisholm, eds.
The Unified Web Site Guidelines were compiled by the
Trace R
& D Center at the University of Wisconsin under funding from the
National Institute on Disability and Rehabilitation Research
(NIDRR), U.S. Dept. of Education. This document is
available at:
http://www.tracecenter.org/docs/html_guidelines/version8.htm