See also: IRC log
<trackbot> Date: 21 January 2014
<adam_solomon> i just got on
<adam_solomon> im having communication problems but i am the ipcaller.a
<Loretta> Scribe:Loretta
AWK: we had some discussion last week, but needed to talk more about scoring metrics.
<shadi> http://www.w3.org/WAI/ER/conformance/comments-20131129
Shadi: See link to disposition of
comments from WCAG WG on the previous draft.
... Most comments we wrote resolutions to; people have been
asked to review the disposition of their comments.
... scoring metrics is still an open issue. It has been an
issue since the beginning of this work.
... There have been repeated requests for some way to simplify
the status of how well a web site conforms to WCAG.
... Other people have been opposed to this for a variety of
reasons.
... Some of the ideas: we are not doing any kind of
sophisticated scoring, just computing a ratio.
... it does give some indication for the web site.
... Possibly we should rename this (aggregation?) rather than
calling it scoring, since that raises expectations that aren't
being met.
... We would like to keep it in for the coming draft, to get
more feedback and also to try it out in practice.
... We hope this will be the last draft; if we remove it now,
we lose any opportunity for more public input.
<Joshue> +q
AWK: suggestion call this issue out as one for which we are especially soliciting comments.
Shadi: That is the idea. Publish with an editor's note asking for input.
Josh: That is a good way to deal with this. Publish as is, aggregate feedback, and make a final decision.
David: I'm attracted to the
scoring idea. Gov of Canada is asking for something like
this.
... Because they had no metrics, they lost that part of the
judgment in court.
... personally I'm skeptical we can find a good metric.
... The problem with the current scheme is that 1.3.1 really
covers about 50% of the accessibility issues. But failing 1.3.1
only has a small impact on the score.
... I am afraid that people want this so badly that whatever we
call it, it will become the score in the field.
<shadi> http://www.w3.org/TR/accessibility-metrics-report/
Shadi: In the Research and
Development WG, we tried to find a more sophisticated
metric.
... We did find what David described. To date, we aren't aware
of a metric that meets the requirements we were looking
for.
... One thought: you don't only count the SC across the
website, but each page gets a score. So repeatedly failing one
SC will have a larger impact on the metric.
... We added lots of warnings about how this is not appropriate
for comparing the accessibility of different web sites. We can
clarify that language even more if this stays.
David: I think that is the best we can do.
AWK: proposal is to publish this version as an updated Editor's Draft with an editor's note asking for feedback on the scoring metric.
<shadi> [[Eval Task Force specifically asks feedback on this section. Please indicate if the score provided now is useful for you or if possible provide input for improving the concept of a score in this evaluation methodology.]]
<Joshue> +q
Josh: is the question whether there will be a scoring mechanism, or whether this is the right scoring mechanism?
Shadi: one of the suggestions is
to completely remove the section about the scoring
methodology.
... Are you asking whether this suggestion should be in the
note?
Josh: evaluation and metric go
hand-in-hand. Some kind of metric is needed.
... But better to get this out for comments, and then we can
come back and thrash it out.
Shadi: Eval TF is not committed
to having a scoring metric. The TF is split on this
issue.
... In the previous draft we also asked for input about
scoring, and the comments were also half and half.
... This time we are tweaking the question to ask how to
improve the scoring.
David: has anyone surveyed the scoring performed by the major evaluation tools?
Katie: I have. They weigh it with other factors, like how often something appears on a page.
<Joshue> +q
David: Has anyone considered this type of approach, looking to see if there is any commonality among the tools. Is a pattern starting to emerge?
Katie: one of the things my org includes in that algorithm is the number of instances per page; they also rate some things as more important than others, and they take remediation cost into account.
James: I also assume that their algorithms are proprietary.
Josh: anyone's algorithm will be
weighted by their own experiences and biases.
... and there is a disconnect between computed scores and the
results of user testing.
<shadi> http://www.w3.org/TR/accessibility-metrics-report/
Shadi: I refer again to the
research report on web accessibility metrics. It does look at a
fair number of approaches to such metrics.
... conceptually a lot of the research in this report looks at
the different tools and their approaches.
... you start getting into validity issues, where they consider
what they think is more important. Is that a measure of WCAG
conformance?
Katie: it is not.
Shadi: we also get into issues of
complexity. Those tools tend to measure the failures more, or
what they can automatically check.
... we might end up dropping the type of scoring, but the
current method is fairly simple to compute.
... it is an optional part of the methodology.
<Joshue> LGR: We seem to be diving into the score - I don't think we'll get through that today. Let's stick to whether we should publish.
Loretta: we seem to be diving into details of how to compute the score. I don't think the WG will settle this today. Can we decide whether to publish with the note?
<Joshue> lets do it
AWK: are there any other issues?
Resolution: Approve publication of an updated editor's draft of the Eval note with an Editor's note asking for feedback on scoring.
Shadi: We still need to generate an updated draft. Expecting to publish by the end of next week.
<Joshue> http://www.w3.org/TR/UAAG20/
<Joshue> http://www.w3.org/TR/IMPLEMENTING-UAAG20/
AWK: UAAG group requests that we review the last call working draft of UAAG20. The comment period has been extended to Jan 31.
<Joshue> Please send comments to [email protected]
AWK: Are there WG members who
would be particularly interested in doing this?
... do we ask individuals to review, or do we try to develop a
consensus group review?
Michael: it is our call how to handle this.
AWK: if individuals see particular red flags, please brings those issues back to the WG for discussion.
David: When we make statements as a WG, they are usually heeded. When individuals submit, it doesn't carry the same weight.
AWK: This isn't surprising.
... If there are things that individuals feel strongly about
and this the rest of the WG would as well, we could put those
on a survey. But there is not much time.
<Joshue> +q
AWK: We would need to identify such things by Friday.
Josh: has anyone looked at this
draft?
... maybe people with familiarity with UAAG would skim?
Michael: the point of a WCAG review is to make sure there are no incompatibilities with WCAG.
(Those with familiarity have no time...)
<Joshue> I'll certainly have a look.
AWK: people are encouraged to do what they can and comment to the UAAG group by Jan 31.
AWK: Request for review of our
latest draft went out to mailing lists, twitter, etc.
... Sources are hosted on github. anyone can access those
files, make edits in their local repository and send a pull
request to apply their edits.
<AWK> http://www.w3.org/WAI/WCAG20/comments/
AWK: In our comment instructions,
there is info about how to use github to comment on WCAG.
... Commenters still need to provide the rationale for a
change, but they can be very clear and explicit about what
changes they would like to see.
... If someone submits a comment via github, whether minor,
like fixing a spelling error or a bad link, or more substantial,
like rewriting a paragraph, they submit a pull request.
... That comes to us as an email, and we will log it (as usual)
in the comment tracker.
AWK: We still bring that to the WG for discussion. Depending on the outcome, we might merge it in (which sends an email to the submitter). We will also send a response to the public list, as always.
David: so anyone can file an issue?
AWK: Yes.
<Joshue> +q
David: from the low tech end, just clicking the issue button is easy.
David: people can say anything generally. No need to write code, etc.
AWK: I should look into that feature. The solution may not be known to the commenter, for instance. That may be a good way to expand our comment process.
Sailesh: Will github scare off commenters who aren't so technical? Will it make them not submit their comments?
AWK: If that were the only way to submit comments, it probably would. But it is not the only way. There is still the online form and email comment submission.
David: the editors may find the github issue tracker a better way to track our bugs.
AWK: Timeline: comments are due on
Feb 14. We will process comments as we get them and finish
accepting comments on March 4. This is a pretty rapid timeline.
It gives us 2 and 1/2 weeks to address all the comments we
have.
... We will have more time if people submit comments early. We
have a couple of editorial comments that have already come in
and will just be handled by the editors.
... Goal is to have a final version that could be approved for
publication on March 11, to be published on March 13.
<Joshue> LGR: It's an ambitious timeline!
AWK: CSUN - we are targeting Tuesday for a Face to Face. The only thing that might change it is if some magic benefactor would host us, but couldn't do it on Tuesday.
<Ryladog> URL is dead
<Joshue> https://www.w3.org/2002/09/wbs/35422/ARIA_tech_Jan21_2014/
Sailesh: I had concerns on the
old version that I commented on when we reviewed it.
... if you look at the bottom section, the rationale for the
writeup.
<Ryladog> * katie thanks Josh
Sailesh: This could also be used for 3.3.3. That is missing.
Sailesh: In the current example,
all errors go into a p tag. They should be marked up as a
list.
... Just putting them in a paragraph is not good
practice.
... there is a nuance between using role=alert and
aria-live=assertive.
They could be crafted as separate techniques or combined into one.
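As a rough illustration of the list markup Sailesh suggests, the errors could be rendered as a list inside the alert container rather than a single p element. This is a hypothetical sketch, not the technique's actual example; the helper name and wording are assumptions.

```python
# Hypothetical sketch: render form errors as a list inside a
# live-region container instead of a single <p>. role="alert"
# implies assertive live-region behavior, so authors generally
# need only one of role="alert" / aria-live="assertive".

def error_region(errors):
    """errors: list of error-message strings; returns HTML markup."""
    items = "".join(f"<li>{e}</li>" for e in errors)
    return f'<div role="alert"><ul>{items}</ul></div>'
```

Per the later discussion, the empty container would be present in the DOM on page load and populated with the error list on validation, at which point AT announces it.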
Sailesh: The current technique
test is focused on the details of coding. Would such a test
procedure endure? How can we make it robust?
... Re Kathy's comment that example 1 doesn't work in IE, I
tested it in Firefox, where it works.
... What makes it work in IE is to set focus on it.
<AWK> Loretta: biggest concern is direction of test procedure
<Joshue> LGR: My concern with the test direction is that it makes more sense to have a technique that highlights the distinction.
<Joshue> LGR: I don't know what is the diff between various UA a11y support
<Joshue> LGR: It's complicated.
<Joshue> AWK: There is the issue of how we deal with UA support - it's common to many places.
<Joshue> AWK: Kathy agrees with Loretta.
<Joshue> AWK: Discussion on tech support
<Joshue> AWK: We can check that out - Adams made some editorial comments.
<Joshue> AS: Explains his comment
<Joshue> SP: Elements without content are not exposed by NVDA or JAWS.
<Joshue> AS: What if alert element has no content?
<Joshue> SP: Not a problem.
<Joshue> SP: Only when populated with an error message, is it read out.
<Joshue> <discussion continues>
<Joshue> AS: If it has the CSS declaration display:none will the AT pick it up?
<Joshue> JN: No.
<Joshue> SP: You don't need it.
<Joshue> AS: Will that be a problem, if so it should be avoided.
<Joshue> JN: That shouldn't be a problem.
<Joshue> AS: Ok.
<Joshue> <discussion continues>
<Joshue> JN: It must be present in the DOM on page load - not via the A11y API. This is why they work in pre-ARIA versions of IE, as they bypass the A11y API.
<Joshue> AS: Another comment, the Live Region would pick up on a change in content, if I toggle display:none, would that be enough to trigger the Live Region?
<Joshue> That would be cool.
<MichaelC> I actually don't know...
<Joshue> <discussion continues>
I'm going to have to leave promptly too.
Outstanding issues: test procedure, and complete examples?
Sailesh: 2 techniques or 1 technique?
David: we've been looking at these error messages and how to make them the most accessible.
Out of time: we will leave this issue open for more discussion.
<AWK> Scribe: AWK
RESOLUTION: Leave open
Scribes: Loretta, AWK
Present: Joshue, Kathleen, Michael_Cooper, Loretta, David_MacDonald, Shadi, AWK, Sailesh, adam_solomon, wuwei, Marc_Johlic, Kathy_Wahlbin, Katie_Haritos-Shea, James_Nurthen, [IPcaller]
Date: 21 January 2014
Minutes: http://www.w3.org/2014/01/21-wai-wcag-minutes.html