<laura> Scribe: Laura
RM: Any new members?
(none)
RM: Any new topics?
(none)
RM: Any announcements?
ben: attending a conference in Dubai; any others going?
RM: send to list.
... won't be meeting over Christmas holidays.
<Chuck> +1 to meeting
<Rachael> Draft RESOLUTION: Meet November 22 and 29th
<Ben_Tillyer> +1
<Chuck> +1
<alastairc> +1
<Jennie> +1
<JakeAbma> +1
<ThompsonS> +1
laura: +1
<bruce_bailey> +1
<Makoto> +1
RESOLUTION: Meet November 22 and 29th
<Rachael> The AGWG will not be meeting December 20th and 27th and Jan 3rd.
RM: not meeting December 20th and 27th and Jan 3rd.
... do we want to meet the day before CSUN?
Jennie: thanks for the hybrid option?
... accounting for pre-conf sessions?
<Rachael> Preconference sessions are Monday, March 13
rm: part of today's conversation.
<Rachael> Draft resolution: Hold a hybrid meeting at CSUN on Monday March 13th
<Chuck> Chuck (who has voice issues): I support getting together both in person and hybrid, even if it overlaps.
<bruce_bailey> +1
<wendyreid> +1
<Ben_Tillyer> +1
<JakeAbma> +1
<Regina> +1
<Jennie> +1 I can attend hybrid
<alastairc> +1
<Makoto> +1
<Chuck> +1
<sarahhorton> +1
RESOLUTION: Hold a hybrid meeting at CSUN on Monday March 13th
rm: have had rich conversations. Want to continue.
... 5 said: Dividing evaluation type between levels is problematic
... 3 said: "Dividing evaluation type between levels is appropriate because"
... reads Gregg's comments.
... reads Stefan's comments.
... reads Gundula's comments.
rm: reads Wilco's comments
... reads Jennifer's comments
... reads Jeanne's comments
rm: reads Bruce's comments
... reads Rachael's comments
... reads Alastair's comments
bruce: I read the survey choices wrong as I do not know but I do have thoughts.
rm: reads Shadi's comments.
jeanne: want to separate. It is a good idea. but need to do it carefully.
<Chuck> ok
gn: GSG test methods.
... maybe other tests.
gregg: It's not unequal to give someone more cloth if they are tall.
... there is going to be equity.
... worried things that are not testable should not be put into the inequitable pile.
<Zakim> bruce_bailey, you wanted to say that unambiguous machine testing and binary testing (pass/fail) are different and i might have read question incorrectly
gregg: yes we have a problem getting it front of developers.
bruce: unambiguous machine testing and binary testing (pass/fail) are different and I might have read the question incorrectly
... not a bad characteristic to call out.
wilco: disagree with gregg.
... end result is that needs are not met.
<jeanneS> +1 Wilco
wilco: we did our best with wcag 2. but not the best that we can do.
<bruce_bailey> +1 to wilco's sentiment that we have done wonderful things with wcag2 but may have gone as far as we can with the wcag2 approach
rm: we need to focus on high inter-rater reliability.
... needs meaningful impact and repeatability.
gregg: ease of testing should not be a concern but reliability should.
... if it can't be done then we can't require it.
<bruce_bailey> +1 to rachael that inter-rater reliability is the big picture goal -- and that does not necessarily mean binary or machine testable
gregg: need to figure out where that happens.
<Wilco> +1 Gregg, fully agree with you there :-)
shadi: feel I'm missing something.
... agree reliably testable.
... if binary fails we need to figure out a different way.
rm: overlapping concepts.
... how to think about base conformance.
<Rachael> Straw poll: Should ease of testing be used to define base conformance?
<wendyreid> No
<GreggVan> NO
<sarahhorton> no
<jon_avila> No
<joweismantel> No
<jeanneS> No
<Ben_Tillyer> no
<Lauriat> No
laura: no
<shadi> yes
<Jennie> no
<alastairc> Not to define, but it's a factor.
<Wilco> Yes
<kirkwood> no
<bruce_bailey> yes
<Jay_Mullen> yes
<GN015> no, testing methods can be increased and new ideas created
<shadi> +1 to alastairc
<ShawnT> +1 to alastairc
<JakeAbma> yes
<Chuck> undecided
<Lauriat> +1 to alastairc
<maryjom> Yes, it is a factor
<Jay_Mullen> +1 to Alastairc
<Chuck> +1 as a factor
<jeanneS> No, not as a factor. We already know it is a problem
<GreggVan> I worry that if we start using "hard" as a factor....
<Rachael> draft RESOLUTION: Simplicity of testing does not define base conformance, but is a factor in it
<Jennie> depends on the weight of the factor.
<jon_avila> +0 to factor
<GreggVan> -1
<GN015> no, as long as feasibly testable
<Wilco> +1
wendy: might be a false flag.
... base conformance meet the needs of the base of users.
<jeanneS> +1 for simplifying tests
wendy: should be a flag to make the test simpler.
... it is a factor but not a deciding factor.
ac: difficulty of implementing and difficulty of testing.
... it is a factor but could have it at a higher level.
<alastairc> difficulty of implementing and difficulty of testing, this is testing
gregg: concerned about testing.
... usability testing is only to improve a product.
<kirkwood> ease of testing often depends on scale (of site) that factors in greatly for achievability of accessibility.
gregg: worried about auto testing.
... need better testing methods.
<Zakim> jeanneS, you wanted to oppose
<Zakim> bruce_bailey, you wanted to ask if inter-rater reliability is the real issue ?
jeanne: agree with gregg.
bruce: looking for highest inter-rater reliability.
... think it is key.
... we need clarity of levels
shadi: we want to scale.
... needs to be within scope.
<Rachael> Straw Poll: Please order the factors when considering if a test should be in base conformance: ease of testing, ease of implementation, high inter-rater reliability, level of impact, ease of implementation, type of test.
shadi: need to think about how feasible things are.
<Rachael> Straw Poll: Please order the factors when considering if a test should be in base conformance: ease of testing, ease of implementation, high inter-rater reliability, level of impact, type of test
<Wilco> Can you give them numbers?
<bruce_bailey> give letters
<Rachael> Straw Poll: Please order the factors when considering if a test should be in base conformance: 1) ease of testing, 2) ease of implementation, 3) high inter-rater reliability, 4) level of impact, 5) type of test
<Ben_Tillyer> 4, 3, 2, 1, 5
<GN015> 4, 3
<wendyreid> 4, 3, 2, 1, 5
<Jennie> 4, 3, 2 and 1 together, 5
<shadi> 3, 4, 1/2
<jeanneS> 4,2 (maybe)
<GreggVan> 3 4 2 1 5
<sarahhorton> 4, 3, 2, 1
<Wilco> 0 (Equity), 4, 3, 2, 1, 5
<bruce_bailey> 3 1 4 5 2
<shadi> 3, 4, 1 & 2 (together)
<Jay_Mullen> 4, 2, 3,1, 5
<joweismantel> 4, 3, 2, 1, 5
<Rachael> 4, 3, 2, 1
<jon_avila> 4 3 5 2 1
<JakeAbma> 4, 3, 2, 1, 5
<Chuck> 3, 4
<alastairc> 4, 3, 2, 1, 5
<Lauriat> 4, 3, 2, 1
<Makoto> 3 4 1 2 5
<Chuck> I am visually identifying 3 and 4 getting high marks, the rest then gets more complicated.
laura: 3, 4, 2
Jennie: level of impact needs to be defined.
<Ben_Tillyer> +1 to Jennie
<Rachael> +1 to jennie
<kirkwood> 5, 3, 2, 1, 4 (seems impossible)
sarah: helpful to have level of impact of tests and impact.
RM: reads Gregg's comments.
gregg: could have great impact here.
rm: reads Gundula's comments.
gn: rating adjectival can be more problematic than rating yes/no.
... At the end, the adjectives need to be evaluated, cumulated, and put together to a joint result.
<Wilco> +1
gn: But they are incalculable.
rm: reads Wilco's comments.
<kirkwood> +1 to Wilco
rm: reads Jennifer's comments
jennie: It would be important to publish inter-rater reliability
rm: reads Jeanne's comments.
Jeanne: find it ironic that I worked in an org that had a rubric with adjectival ratings for WCAG 2.x
<jon_avila> As Jeanne says we have to examine a rubric for WCAG 2 to figure out if captions or alt text meet today.
<kirkwood> good point, Jeanne
rm: reads Bruce's comments
bruce: +1 to jeanne
rm: reads Rachael's comments.
... reads Shadi's comment.
<Lauriat> +1 to Shadi's question
Shadi: we might be talking about different things.
<alastairc> https://docs.google.com/document/d/1lPAl7mddnMnIK5PQbvQqSNcXi_pDorSVxwzratelYR4/edit#heading=h.q7a1p1s14gm5
rm: reads Alastair's comments
Wilco: asking jeanne if she can share the adjectival ratings that were used.
<jeanneS> +1 to prototypes (from Shadi's comment earlier)
Wilco: expectation is that it would increase complexity.
<Chuck> Final Score: 4,3,2,1,5
<shadi> Chuck++
<jeanneS> Wilco, I cannot share proprietary information, but I would be happy to work to build a prototype
rm: (Chair hat off) think it is a mixed bag.
<Rachael> draft RESOLUTION: No decision on adjectival ratings, need to prototype and request examples
<Chuck> there's an additional option '0' for equity, that had one mention.
<GreggVan> +1
<bruce_bailey> +1
<wendyreid> +1
<Makoto> +1
<Jay_Mullen> +1
<Ben_Tillyer> +1
<Chuck> +1
<Wilco> 0
<joweismantel> +1
<ShawnT> +1
laura: +1
<Jennie> +1
<Azlan> +1
<sarahhorton> +1
<shadi> +1
<kirkwood> +1
<JakeAbma> +1
<jeanneS> +1
RESOLUTION: No decision on adjectival ratings, need to prototype and request examples
<Chuck> yes, moved conversation forward
<Jay_Mullen> I must drop as well. Apologies. Busy day.
<GN015> I need to drop, bye!
<jon_avila> I can for 5.
<Chuck> scribe: chuck
<Azlan> I too need to drop
<jon_avila> scribe: jon_avila
<Chuck> Final Score: 4,3,2,1,5
Chuck: Found a final score of 4, 3, 2, 1, 5 - honorable mention of an additional option, 0 for equity. If scored, it would be last.
<jeanneS> If equity had been included, I would have voted for it, so I wouldn't consider it last
Rachael: Don't know if we have a resolution - but we should ponder these as criteria we should be considering.
Rachael: How do we handle cumulative errors when the number is arbitrary.
<Wilco> scribe+
Rachael: For the necessary - Bruce - accounting is needed - on spectrum with alt text and keyboard - in-house folks already account.
Bruce: we are talking about incremental for other issues - we are really already doing it as practice.
<Chuck> The factors when considering if a test should be in base conformance scored in the following priority (high to low): 4) level of impact, 3) high inter-rater reliability, 2) ease of implementation, 1) ease of testing, 5) type of test, 0) Equity
Rachael: from Jeanne - necessary and problematic - we need some outside expertise, e.g. SEO.
Rachael: Alastair said - we need some that include cumulative but it depends on scoping tasks and processes.
... Wilco talked about what is and is not an individual issue - is it for each word or character - could create overhead and testing tools can't test for it today.
yes
<Wilco> scribe+
<Wilco> Rachael: ... reading Gregg's comment
<Wilco> Gregg: Errors and bugs should be covered in policy. Sites will have bugs.
<Wilco> ... If you say there's some number, the point at which it's no longer accessible.
<Wilco> ... If you have a threshold, it's just part of the specification. No more than X number of times.
<Wilco> ... If we're talking about failing once, or ten times on a page. Failing once is a failure.
<Wilco> ... If we have a threshold we should put it in the criterion. If we don't have a number it gets qualitative, and it goes into adjectival.
<bruce_bailey> IMHO weak alt and/or poor keyboard support can be cumulatively fatiguing
<jeanneS> It's not a bad idea to put error severity in adjectival
<Wilco> Rachael: Based on responses, it sounds like we all agree that we need something, but it can be problematic.
<Rachael> draft RESOLUTION: Continue exploring cumulative occurrences / thresholds as part of testing
<Wilco> Gregg: If we're talking about cumulative events in a criterion. If it's occurrences of a criterion I'm not sure. It just fails.
<jeanneS> +1
<Rachael> draft RESOLUTION: Continue exploring cumulative occurrences / thresholds as part of outcomes
<Wilco> Bruce: When is it so bad we'll fail it as an auditor?
<Wilco> ... What if it's a bunch of low quality alt texts?
<Rachael> draft RESOLUTION: Continue exploring adding cumulative occurrences / thresholds as part of outcomes
<Wilco> Gregg: I don't think we need a resolution that this is in outcomes. We already do.
<Wilco> ... The question is whether we're doing it as part of conformance. It then falls into scoring.
<Wilco> Rachael: I disagree that it's covered.
<Wilco> Gregg: We didn't get to scoring yet
<jeanneS> I disagree that it is covered
<Chuck> I disagree that this is covered, but I agree we can move on.
I don't think we have figured this out yet.
<Wilco> Sarah: Building on something Alastair said earlier. Covering this in the context of task / flows.
<jeanneS> I think it needs concrete prototypes
<Wilco> ... Frequency works very well with the work we're doing in the severity group.
<Wilco> ... The group is also looking at context. An option would be to ask that group to explore this further.
<Jennie> Frequency, duration, intensity,
<jeanneS> +1 to Sarah and building out some examples
<Wilco> Rachael: ... reading from survey.
<Wilco> Bruce: I don't think anyone would disagree that tool vendors will have scoring. The question is if we want to put some parameters around scoring, or just ignore it
<Wilco> Gregg: There are two ways scoring can be used. One of them is to score whether or not you conform.
<Chuck> +1 to bruce
<Wilco> ... That's what I think doesn't work. However, scoring used in an adjectival way, for QA: if you fail to conform it's nice to know that you had a score of X, and next year you do better.
<Wilco> ... Scoring is useful outside of conformance, to show progress.
I agree with Gregg that scoring is useful to show progress.
<Wilco> ... Scoring for improvement is great, scoring for conformance is a problem
<Wilco> Wendy: Agree with Bruce and Gregg.
<Wilco> ... Commercial vendors provide scoring. That scoring is inconsistent. That it exists shows that it's required from an implementor's viewpoint.
<Wilco> ... It's a lot easier to say we're at 87%, than it is to say we're really close and have 7 bugs.
<Wilco> ... Having metrics makes this job easier, but scoring for conformance is problematic.
<Rachael> draft RESOLUTION: Place parameters around scoring and how it should be done, avoid scoring at base level of conformance, continue to explore whether/how WCAG should use in higher levels of conformance.
<Wilco> Jennie: I could see scoring helpful in terms of procurement. And because I often deal with the rumour mill, when people say there's a number. If there's a number it should be clearly identified.
<Rachael> draft RESOLUTION: Place parameters around scoring and how it should be done, avoid scoring at base level of conformance, continue to explore whether/how WCAG should use scoring in higher levels of conformance.
<Zakim> bruce_bailey, you wanted to mention VPAT was industry response to 508 conformance, and it has been good, but USAB did not require VPAT as part of 2017 Revised 508 Standards
<Wilco> Bruce: In 508, industry came up with a VPAT, which is a way to score things. That happened after the original standards. We thought about requiring VPAT but decided no.
<Wilco> ... Scoring might be the same with WCAG.
<Chuck> +1 to resolution
<Rachael> draft RESOLUTION: Place parameters around scoring and how it should be done, avoid scoring at base level of conformance, continue to explore whether/how WCAG should use scoring in higher levels of conformance and for quality improvement
<Wilco> Gregg: Maybe add "and for quality improvement" to the resolution.
<Chuck> +1
<GreggVan> +1
<wendyreid> +1
<ThompsonS> +1
<Rachael> +1
<bruce_bailey> +1 to Jennie's comment about having to deal with scores (from whomever)
<Wilco> -1
<Jennie> +1
<joweismantel> +1
<bruce_bailey> +1 to resolution
<sarahhorton> +1
+0
<JakeAbma> 0
<jeanneS> 0 - not sure about banning it at base level. Needs more thought
<Chuck> I read it the way wilco reads it, and support it.
<Wilco> Wilco: I think scoring should be below the base level, not part of the base level
<Wilco> Gregg: That's what I meant by "for quality improvement" You could say "at all levels"
<Wilco> Rachael: I don't think allowing less than 100% precludes people from running scores.
<Zakim> bruce_bailey, you wanted to ask if some conformance models included machine testability ?
<Chuck> +1 to Rachael, they don't conflict. You can score, and the world does right now.
<Wilco> Jon: I'm losing track of places where it might not have something perfect. Outcome level, scope level... It's hard for me to understand where there's tolerance for things.
<Wilco> ... It's possible it might not be at this level, but missing the full picture.
<bruce_bailey> Did some of conformance models include machine testability? I think that implies scoring at base level.
<Wilco> Jeanne: Agree with Jon. I'm not sure that precluding it from the base level is going to give us the equity we want.
<Wilco> ... If we take scoring out of the base level then we're limiting what can be included in the base level. We're only including things that can't be scored.
<Rachael> draft RESOLUTION: Place parameters around scoring and how it should be done, address cumulative occurrences within outcomes instead of as part of scoring, require a perfect score at base level of conformance, continue to explore whether/how WCAG should use scoring in higher levels of conformance and for quality improvement
<Chuck> +1 to building it out and trying it out, acknowledging Jeanne's valid points on equity concerns.
<Wilco> Gregg: I think the intent is to say whether you conform or not, you can't pick and choose which criteria, and skip others.
<Rachael> draft RESOLUTION: We will try the following, acknowledging it may not work: Place parameters around scoring and how it should be done, address cumulative occurrences within outcomes instead of as part of scoring, require a perfect score at base level of conformance, continue to explore whether/how WCAG should use scoring in higher levels of conformance and for quality improvement.
<Wilco> ... That's the level. But when you take a topic, you would have it at the level. Not to determine conformance, but to show you're working up to conformance.
<bruce_bailey> Draft resolution at 12:33 works for me
<Chuck> +1
<bruce_bailey> 12:34 good too
<Wilco> +1
<GreggVan> +1
<wendyreid> +1
<Wilco> Rachael: Because outcomes can be at page or process level. We can still have for example an outcome that says you can't have more than 20 spelling errors on a page.
<jeanneS> +1
<ShawnT> +1
<kirkwood> +1
<sarahhorton> +1
<bruce_bailey> +1
<joweismantel> +1
<Jennie> +1
<Wilco> ... Acknowledging that this may not work and can revisit
<JakeAbma> +1
+1
<Rachael> adaptive requirements: the context in which content is being used is needed to test, e.g. internationalization
RESOLUTION: We will try the following, acknowledging it may not work: Place parameters around scoring and how it should be done, address cumulative occurrences within outcomes instead of as part of scoring, require a perfect score at base level of conformance, continue to explore whether/how WCAG should use scoring in higher levels of conformance and for quality improvement.
<Wilco> Rachael: ... reading comments.
<Wilco> Gregg: Any time we have tests that change, if it's this, you test it this way, if that, you test it that way. That's just conditional.
<Chuck> Wilco: I clicked wrong thing.
<Wilco> Rachael: I think we should use site context.
<Chuck> Wilco: I think we need better examples, and to Gregg's point, you are probably right they are conditional. It's a type of conditional that we need to describe and explore better. We haven't done this before.
<Chuck> +1 to Wilco.
<Wilco> Gregg: Doing it by language sounds conditional. Language X, these are the rules, language Y, those are the rules.
<Wilco> ... I think the word conditional is more descriptive
<Wilco> Rachael: This is the wording we agreed for the time-being
<Chuck> Wilco: It's sort of that organizations can decide on later. We don't know best ways to write text in every single language. Can't decide ahead. Too many languages. We need some other solution.
<Chuck> Wilco: If we bound this approach properly, if orgs describe their approaches, and have sufficient qualities, and we can come up with quality requirements. That's what is being proposed.
<Chuck> Wilco: Let's not put it in the standard, but allow orgs to fill in.
<Wilco> Gregg: I understand that problem. In standards work you can't cite something as normative that's not also normative.
<Wilco> ... Saying something will be defined later isn't allowed. On WCAG 2 we also had some gaps, things that W3C would later come back to, that never happened.
<Rachael> draft RESOLUTION: Wait to make a decision on adaptive requirements until the subgroup creating prototypes comes back.
<wendyreid> +1
<Chuck> +1
<Wilco> +1
<sarahhorton> +1
<Regina> +1
<joweismantel> +1
<ShawnT> +1
<bruce_bailey> +1
<jeanneS> +1
<Ben_Tillyer> +1
<GreggVan> +1 but document issues so the group can speak to them
RESOLUTION: Wait to make a decision on adaptive requirements until the subgroup creating prototypes comes back.
<Wilco> Rachael: ... reading comments
<Wilco> Gregg: There are reasons for doing one or the other color measure. But if an author uses one, and the evaluator uses the other, so the only way to be sure is to use both.
<Wilco> ... Doing it from different locations, I don't see how that can be done because we can't know where someone is.
<Chuck> +1 to wilco's comments
<Chuck> Wilco: This can really only work if orgs declare what methodology they used. Otherwise the assertion cannot be confirmed. The methods need some quality assurance. We will need guidelines to make sure they are good.
<Chuck> Wilco: There's benefits we can get, however.
<Wilco> Jeanne: I think I don't really understand the difference between adaptive and extensible.
<Wilco> ... We should get better examples
<Rachael> draft RESOLUTION: Wait to make a decision on extensible requirements until the subgroup creating prototypes comes back.
<Chuck> +1
<sarahhorton> +1
<ShawnT> +1
<Wilco> +1
<joweismantel> +1
<Ben_Tillyer> +1
<jeanneS> +1
<wendyreid> +1
RESOLUTION: Wait to make a decision on extensible requirements until the subgroup creating prototypes comes back.
<JakeAbma> +1
<GreggVan> +1 again sending them the issues
<Wilco> Rachael: Our intent for next week or the week after is to break into groups in breakout rooms, and think about, work through the models?
<Chuck> Wilco: For clarity, you mentioned sub-group, this isn't a sub-group, this is breakout rooms.
<Wilco> Rachael: We'll use meeting time to generate ideas and move the work forward.
<Chuck> I have hard stop at top of hour.
<Wilco> Gregg: We should add links to go to the full descriptions.
<Wilco> Rachael: We can add those. We did try to add abstractions.
<Ben_Tillyer> Thanks for the great chairing as usual
<jeanneS> +1 for the chairs and scribes
Present: Francis_Storr, jeanne, bruce_bailey, jaunita_george, joweismantel, JakeAbma, SuzanneTaylor, Detlev, olivia-hogan-stark, Raf, sarahhorton, Wilco, ShawnT, Laura_Carlson, mbgower, jon_avila, kirkwood, Lauriat, maryjom, alastairc, Jennie, MichaelC, Jay, iankersey, wendyreid, Caryn, Poornima, ToddL, JenStrickland, ShawnLawtonHenry(first_part), Ben_Tillyer, GreggVan, Rachael, Chuck, shadi, Azlan, Makoto, StefanS
Regrets: Suzanne, Detlev
Scribes: Laura, Chuck, jon_avila