Meeting minutes
Chuck: Part of the purpose of this call is to discuss lessons learned.
Rachael: We had some good discussion around collaborating with docs.
… Please add your comments
Chuck: Working with smaller groups has been really enjoyable; looking forward to more opportunities to do that.
… We worked on several of these guidelines yesterday.
alastairc: Control and Focus Appearance Guideline
… Scope comes down to: what do controls look like, including when you interact with them. A bit of focus visible and focus appearance. Target size we put to one side, as Patrick and Wilco had incorporated that into their pointer input work the other day. We also thought target size isn't always related to visual appearance.
… We found some research. It's quite difficult to find anything to do with focus that doesn't start from WCAG 2.
… There are basic things like "if someone's using the keyboard, they need to be able to see the focus". We took some user stories from COGA's Content Usable document.
… Working on the assumption that if general affordances are good for the overall population, they're even more helpful for people with cognitive impairments.
… Story: I need to know how to use all of the controls and the outcomes of actions.
… Need to not confuse different types of controls.
… You should not be able to confuse any custom control with any platform control.
… The appearance of custom controls shouldn't change substantially from the platform one.
… Custom controls for which there's no platform control should have substantially different features.
… This is a new approach but we are exploring it.
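As an illustrative aside (an editor's sketch, not text from the discussion; the function name is hypothetical): the stories above are about appearance, but the behavioural counterpart of "a custom control shouldn't be confusable with the platform one" is giving the custom element the native control's semantics and keyboard behaviour, so it matches the platform control it resembles.

```typescript
// Sketch: give a div-based custom "button" the semantics and keyboard
// behaviour of the platform <button>, so appearance and behaviour match.
function upgradeToButton(el: HTMLElement): void {
  el.setAttribute("role", "button"); // expose button semantics to AT
  el.tabIndex = 0; // keyboard-focusable, like a native <button>

  // Native buttons activate on Enter and Space; mirror that here.
  el.addEventListener("keydown", (event) => {
    if (event.key === "Enter" || event.key === " ") {
      event.preventDefault(); // stop Space from scrolling the page
      el.click(); // route activation through the normal click path
    }
  });
}
```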
<Lauriat> Control and Focus Appearance Guideline Scratchpad
alastairc: As much as possible, controls' usage should be obvious from their appearance.
… Story: I need to know what is a control and what isn't.
… There needs to be a differentiator for controls.
… At silver level, require some form of hover _and_ focus effect.
… Story: As a non-mouse user, I need to be able to see what's currently focused.
… A reworking of the WCAG 2 ones. Some added, such as focus being restored to a @@ location when, e.g., dismissing a dialog (see the sketch below).
alastairc: We covered eye tracking and voice input with other tests.
alastairc: [scribe missed the last one, which was a new one around focus management after clicking off from a control]
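A minimal sketch of the dialog focus-restoration point above (an editor's illustration assuming the native <dialog> element, not text from the meeting):

```typescript
// Sketch: when a modal dialog is dismissed, return keyboard focus to the
// control focused before it opened, so non-mouse users keep their place.
function openDialog(dialog: HTMLDialogElement): void {
  const previouslyFocused = document.activeElement as HTMLElement | null;

  dialog.addEventListener(
    "close",
    () => previouslyFocused?.focus(), // restore focus on dismissal
    { once: true }
  );

  dialog.showModal(); // moves focus into the dialog while it is open
}
```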
<Rachael> Provide help scratchpad: https://
Rachael: There's a lot of research on help. Need to focus on what's needed at what level.
… There may be an assertion required that user testing was conducted to ascertain the help was actually helpful.
… Work on context-sensitive help, including the meaning of words; purpose of controls; required field indicators or other standard [non-AAC] symbols.
… Providing user prompts/wizards: help completing tasks.
… User need of human help: need to know whether interacting with a human or computer. Need to know how to reach a human being. Someone with a memory impairment may need to be able to contact a human immediately.
… Speech/hearing need: Need a way to reach human help in multiple ways, or with AT.
… Need a mechanism for initiating in-page interactive help.
… Need: assistance with data entry (spelling; suggested input)
… Outcome: providing support during task completion - e.g. inline help completing forms (see the sketch at the end of this summary).
… Need: persistent instructions.
… Outcome: provide detailed documentation (not necessarily integrated into the interface).
… Need help to get to docs that may have bigger scope than the current page.
… Outcome: Wayfinding
… Need: Help with when an interface is different/new.
Rachael: Outcome: Consistent help
… Need consistent ways to get to help, and consistent vocab/terminology within the help.
… Need adaptable and personalizable help; UI to be simplified to start with.
… Talked about whether digital assistants are more effective when they are human-like.
… Need instructions to not rely on sensory characteristics.
… Need search features to be free of jargon and provided in a consistent place.
… Outcome: Providing help with data visualizations and non-text information.
… We organized the outcomes. Our next steps are figuring out which of these overlap and doing a clean-up pass before creating the PR.
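As a concrete illustration of the inline-help-in-forms outcome above (an editor's sketch; the element ids and wording are hypothetical), help text can be tied to the field it explains so assistive technology announces it at the point of need:

```typescript
// Sketch: associate inline help text with a form field via aria-describedby,
// so the hint is announced when the field receives focus.
function attachInlineHelp(field: HTMLInputElement, helpText: string): void {
  const hint = document.createElement("p");
  hint.id = `${field.id}-hint`; // assumes the field already has an id
  hint.textContent = helpText;

  field.insertAdjacentElement("afterend", hint); // show the hint next to the field
  field.setAttribute("aria-describedby", hint.id);
}

// Hypothetical usage:
// attachInlineHelp(
//   document.querySelector<HTMLInputElement>("#expiry")!,
//   "Enter the date as MM/YY, e.g. 04/27."
// );
```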
<Chuck> matatk: Some of those needs that Rachael covered are actually things that we have a sound architecture for resolving in our task force; watch this space. We'll come to you for review.
<Chuck> Rachael: Thanks, add to this doc.
<Chuck> matatk: We'll keep in touch with you in WAI CC.
<Zakim> Chuck, you wanted to ask about computer/human distinction
Chuck: Regarding the distinction of human/computer help (which may be AI). Not disputing the need to know the distinction. When the distinction is known... what happens?
<Patrick_H_Lauke> If I know I'm talking to a robot I know that I can just use keyword-based inputs, rather than forming full sentences
Rachael: Beneficial for people with anxiety or communication challenges; sometimes there's a positive aspect to knowing it's a robot. Also, if people know they're talking to an AI they tend to modulate their communication accordingly.
<Jennie_Delisi> +1 to Rachael's description
<Patrick_H_Lauke> Ask ChatGPT...
kirkwood: Definitely different anxiety levels when communicating to a person or a machine; around things like needing to repeat questions or not, and other factors.
Lauriat: We went through research and found a ton of different materials, but not a whole lot in the way of easily discoverable direct research studies.
… Things like captions are not controversial.
… We came up with 18 categories. Skipping to outcomes...
… 1. Providing a clear mechanism for enabling media alternatives (see the sketch below).
… 2. Mechanisms for finding media that have the desired alternatives available.
… 3. Audio and text descriptions are available for non-visual access to visual content. This started as audio description; however, we also included text descriptions of the video.
<Patrick_H_Lauke> Pleased to note that we moved away from "alternative" to "description". The former always implied some form of 'primacy' of the visual/audio-visual material, while the latter puts them on a more equal footing
Lauriat: Bunch of aspects of quality around this. E.g. the order in which you describe things; extended AD; consistent terminology; AR/VR - incorporating spatial info.
… the language itself: e.g. if you're watching a film with a scene in a different language, that needs to be reflected (difficult for burned-in subtitles).
… 4. Text alternatives to info conveyed by the audio track. Captions, transcripts, synchronized and not.
… 5. Audio alternative in another or preferred language. I need an alternative to the audio _and_ in a different language, which includes sign language.
<Patrick_H_Lauke> (ah, spoke too soon about 'alternative' vs 'description'. was only partially relevant in this discussion, probably more for images etc though)
Lauriat: Providing captions and providing translated captions. Providing sign language that's synchronized. The underlying need is the same.
… 6. Audio alternative with extra functionality to help me understand the content, e.g. enhanced captions.
… 7. Locating captions / Captions in space. They may be behind you!
… 8. Media alternatives convey color perception information. This came from the Media Accessibility User Requirements (MAUR)
… 9. Info conveyed by sound is conveyed by non-audio means.
… Along with traditional media things, we also need to cover things like earcons.
… The test for this is particularly abstract.
… 10. Synchronized media: ensuring they're actually synchronized.
… 11. Separating audio tracks, e.g. background music from dialogue. Being able to mute everyone except a primary speaker. Pitch shifting and speed adjustments. Also the same for video tracks, which is conceptually the same:
… e.g. you may want to lower the complexity of the video information, have a shorter version, etc.
… Didn't get through the tests for those.
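For outcome 1 above (a clear mechanism for enabling media alternatives), a minimal sketch using the standard TextTrack API (an editor's example, not from the minutes):

```typescript
// Sketch: turn on the captions track for a requested language, if the
// media offers one, and hide any other text tracks.
function enableCaptions(video: HTMLVideoElement, lang: string): boolean {
  const tracks = Array.from(video.textTracks);
  const target = tracks.find(
    (t) => t.kind === "captions" && t.language === lang
  );
  if (!target) return false; // no captions in that language

  for (const t of tracks) {
    t.mode = t === target ? "showing" : "disabled";
  }
  return true;
}
```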
<Patrick_H_Lauke> earcons, audio signals etc were covered in 1.1.1 under WCAG 2.x
<Chuck> matatk: I've got 11 outcomes, we need to check the minutes. On the media accessibility user requirements, we are in the process of updating those in our TF.
<Chuck> matatk: Various groups are working to update the media user requirements, including sync requirements that speak to this requirement.
<Chuck> matatk: We didn't think it would be possible to make these normative, there are so many stages in the media pipeline.
<Chuck> matatk: May be interesting research.
Lauriat: Determining the quality of these outcomes is interesting; it depends on the type of person.
<chaals> [separating audio tracks, looking for simpler versions of video often isn't wildly complex technically if the requirements are known and can be catered for in production]
Lauriat: E.g. some people with cognitive disabilities need AD that reinforces what's happening in the video; another need is for someone who can't see the video, who may need a different level of detail/terminology. We included both of those as the same user need in the outline of outcomes. The fundamental need is the same, but in both of those cases, as you're going through the tests and thinking about quality, you can think of quality for the different personas.
Janina: We promised MEIG a fairly aggressive timeline for updating the MAUR to 2.0. Very interesting points, Lauriat. We have some different vocab and you have some points we don't have in the MAUR. We're going to want to stay co-ordinated.
<Zakim> mbgower, you wanted to say I can anticipate the desire/need to have variable descriptions over time, something we'll likely get to with AI
Janina: We need to update soon, as the various media organizations are then going to drum up support for meeting them.
<kirkwood> that's too many times
mbgower: There's going to be a lot of scope for granularity in descriptions. E.g. the first time I watch a film, I need more AD, and on subsequent viewings, need less AD to enjoy it the same. Did you identify that need?
<jon_avila> Yes, we talked about granularity - this will be needed with extended reality. Also, including on demand descriptions
Chuck: Let's continue that conversation outside of the call.
Chuck: Moving on to Keyboard Support.
<Chuck> https://
<Lauriat> https://
kevin: High-level summary: we managed to capture user needs. Didn't get to outcomes. We spotted a lot of overlap with pointer events and other related keyboard work.
… One question we had was: where do we draw the line between the user, the keyboard and the content.
… Lots of different keyboard interfaces: physical, virtual, XR, ...
… We focused on the keyboard to the content, rather than the user-to-the-keyboard stage.
Rachael: Congratulations; what an incredible amount of work we've got done. When we came to TPAC, we had 2 that were PRs; 2 not started. Keyboard may not make it to outcome, but almost every other guideline has got to the stage of outcomes or a PR. Pat yourselves on the back!
… The next step is to keep this momentum up. Not sure if we'll pool time on Tuesdays, or ask subgroups to branch out. Goal is PRs by the end of October.
… The initial list of guidelines was a placeholder, not a final one.
… We should take all of the outcomes and do a card sorting exercise to identify overlap and grouping into a new set of guidelines.
Rachael: The goal is to get something out there that people can comment on (what have we missed; what haven't we?)
… We want to take all the maturity models based on work we did at CSUN, last fall, the Silver TF, etc., and test them out.
… Need to go to governance for more input, but we are making good progress.
Chuck: Mentioned overlap: in every subgroup I've been in there's been overlap with other subgroups. E.g. timing and interruptions was particularly interesting.
Patrick_H_Lauke: Wanted to reinforce the point about overlap; listening to kevin, it ties in with discussions we've had on pointer interactions. I feel there's a natural level up from what we've been discussing this week, to holistic user needs for input.
… Regardless of input mechanism: as a user, I want to be able to select (or say I'm interested in) something, then activate that something; from there we split into input-method-specific GLs that share the same "upstream" user needs - whether I'm using eye tracking in 3D to select something, or a keyboard.
… It was good to see these commonalities emerge this week. Glad to have the chance to reorganize these things in this more logical way, rather than patching/bolting things on.
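On the web platform, one concrete expression of that shared "upstream" need (an editor's sketch, not from the meeting) is handling activation through the browser's device-independent event rather than per-input events:

```typescript
// Sketch: "click" is the device-independent activation event - it fires for
// mouse, keyboard (Enter/Space on a button), touch, and most voice-control
// tools, so one handler serves every input mechanism.
function onActivate(control: HTMLElement, action: () => void): void {
  control.addEventListener("click", action);
}

// By contrast, wiring separate mousedown/keydown/touchstart handlers bolts
// input methods on one at a time - the pattern described above as something
// to move away from.
```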
<Rachael> https://
Rachael: Please update the schedule, to help us continue work on this.
… Please edit the wiki directly to let the chairs know who did what.
Chuck: TPAC isn't done! In other timezones we are continuing work on two other guidelines soon, after a break following this call. If you're interested in contributing, let us know.
… When people have been coming in/out it's not been as disruptive as I thought; fresh perspectives were refreshing and encouraging.
… Will be doing self-assigned work as well.
Chuck: That work begins in 35min!
… Thanks for attending; we look forward to continuing to work together.