“There isn’t some way to know when—…?”
There is always a pause here. The client knows what they’re asking, and I know what they’re asking, but putting it into words—saying it out loud—turns unexpectedly difficult.
In the moments before the asking, it was a purely technical question—no different from “can we do this when a user is on their phone.” But there’s always a pause, because this question doesn’t come easy; not like all the other questions about browsers and connection speeds did. A phrase like “in an assisted browsing context” doesn’t spring to mind as readily as “on a phone,” “in Internet Explorer,” or “on a slow connection.” The former, well, that’s something I would say—a phrase squarely in the realm of accessibility consultants. The latter the client can relate to. They have a phone, they’ve used other browsers, they’ve been stuck with slow internet connections.
“There isn’t some way to know when—… a user is… using something like a screen reader…?”
An easy question that begets a complicated answer is standard fare for almost any exchange with a web developer. This answer has, for a long time, been a refreshing deviation from that norm: “no, we can’t.”
The matter is, I’ll offer, technically impossible; computers, you see, can’t talk to each other that way. Often, there’s a palpable relief here: “no” to the technical part; “no” to the computers part. That is, of course, all they had meant to ask. I truly believe that.
Even if we could, I’ll explain, we wouldn’t really want to. Forking our codebase that way would put more burden on us maintainers, not less. There’s an easy parallel to the “when they’re on a phone” conversation, here; one we’ve surely had already. We can never know a user’s browsing context for certain, and making assumptions will only get us and our users into trouble. Whenever a feature, component, or new design treatment was added or changed, we’d be left having all the same conversations around how to translate it over to the “accessible” experience. If those features aren’t essential in the first place, well, are they worth having at all? If those features are essential—well, we’ll still need to find a way to make them work in both contexts.
It could seem like an enticing option for our users, at first glance: an enhanced, fully-featured website, on the one hand, a fully accessible alternative experience on the other. That unravels with even the slightest examination, though: if the fully-featured website isn’t accessible, the accessible website won’t be fully featured. By choosing to have the “accessible experience” deviate from the “real website,” we end up drawing a sharper line between those two definitions, and we nudge the “accessible experience” closer to an afterthought—limited and frustratingly out-of-sync with the “real” website, like so many dedicated mobile sites quickly became.
There’s never any disagreement, here. Again: this is all relatable. We’ve all found ourselves inescapably opted into using the “mobile” version of a website at some point. We’ve been here before as users; we’ve made these mistakes before as developers. We know better now.
But this isn’t a strictly technical question. This isn’t as simple as browser features and screen sizes—a question of one privileged browsing context or another. Technical questions come easy. Partway through the asking—in the hesitation, in the pause, in the word stumbled over—what was meant to be a mundane development question became something much more fraught. Because there was a word that fit.
“Is there a way we can know when a user has a disability?”
The easy “no” felt empowering, but it was a cop-out. “It doesn’t matter; it can’t be done” in response to a deeply fraught question was an unexpected balm for both the asked and the answered. There was, again, that palpable relief—“no” to the technical part; “no” to the computers part. That was, of course, all they had meant to ask.
We no longer have that easy answer. In iOS 12.2 and macOS 10.14.4, a toggle switch has appeared in Apple’s VoiceOver preferences, innocuously labeled “accessibility events.” It was rolled out to no fanfare—short of a brief mention in Apple’s iPhone User Guide—and we’re still not sure how it’s meant to be used. The most generous interpretation is that it’s intended as a “UA string”-style identifier for users browsing via VoiceOver.
We do know this much: when this setting is enabled—and it is, by default—your browser will identify you as using VoiceOver to help you browse the web. If you’re using Apple’s VoiceOver, both your phone and your computer will broadcast your assumed disability to the entire internet, unless and until you specifically tell it to stop.
Update, May 2019: Apple has yanked the setting.
If you’re not furious at this change, you should be—not just for what it means for users, but for what it foists upon you. Apple has burdened you with the knowledge that, now, yes, you can know whether a user has a disability. We can use this information to serve up a limited alternative version of a website, into which we can very easily opt people of a protected class. And once we choose to start listening for “accessibility events,” well, we can capture that information, as with anything else broadcast to the web. A user’s disability can and will be reduced to a single data point—a cold, impersonal true, inexorably bound to their name, stored in a database, perhaps destined to be sold, leaked, passed along to insurance providers, reduced to a targeted marketing opportunity. All under the auspices of inclusivity.
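To make that concrete: here is a minimal sketch of how little code such a capture would take. The event name (“accessibleclick”), the analytics endpoint, and the user identifier are all illustrative assumptions, not a documented API; the “accessibility events” feature itself was experimental and never clearly specified for the web.

```js
// Hypothetical sketch: capturing an assumed assistive-technology signal.
// The event name, endpoint, and identifier are assumptions for illustration.
const currentUserId = 'user-123'; // hypothetical identifier, bound to a real name elsewhere

document.addEventListener('accessibleclick', () => {
  // The user's disability, reduced to a single impersonal boolean...
  const assistiveTechnology = true;

  // ...and shipped off to be stored alongside everything else we know
  // about them. "/analytics" is a placeholder endpoint.
  navigator.sendBeacon(
    '/analytics',
    JSON.stringify({ userId: currentUserId, assistiveTechnology })
  );
}, { once: true });
```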
At some point, the developers responsible for the “accessibility events” feature were, I’m certain, asked whether such a feature were possible. Their answer was “yes.” I don’t doubt that they meant well. I’m just as certain that, in the moment, it felt like the right answer; a technical solution to a technical problem, and a simple matter of browsing context.
Someday—not far in the future, I trust—I’ll be asked a similar question. It will be asked hesitantly, haltingly. The pauses will ring all too familiar. I will no longer have the easy, familiar comfort of technical impossibility—no easy “no” to insulate me from the uncomfortable conversations I should have been having with clients all along. Now, there’s no technical reason that I can’t know whether a user is using “something like a screen reader.” I—my clients, their databases, their organizations, their parent companies, their partners, their VC funders, their advertisers, and so on unto infinity—can absolutely know when a user is disabled.
But I won’t play a part in helping to propagate the mistake Apple’s developers made. I’ll let my answer hang heavy and uncomfortable in the air: no. Not because we can’t—we can. Not because we shouldn’t, though, no, we still shouldn’t. No—now, I will allow the word to become as coarse as I had always wanted it to be, because I no longer have the cold comfort of “well, technically” to hide behind.
No.
Sorry. Why is this a bad thing? Surely, it is a good thing that developers can now respond to the needs of those with accessibility issues?
It’s a bad thing because, instead of designing and building one interface that works well for everyone, developers will build two separate interfaces; more often than not, the accessible version will be a stripped-down, basic version that never gets maintained, because the developers will likely forget about it.
Developers will also be inclined to take the accessible version to the extreme and basically assume every disability in that interface: “You’re using a screen reader? Well, you probably need high contrast mode turned on, too.”
In addition to what was said about advertising, while on the surface it might seem okay to collect the data for analytics reasons, it very quickly becomes a systemic method of segregating users and quantifiably trying to justify providing a terrible experience for people using these tools and settings.
It’s because we should be responding to user needs already. We shouldn’t need to be prompted to make things accessible.
Think about the parallel of a building. We shouldn’t just throw a ramp out on the front steps of the building because we see someone for whom stairs might be a difficulty. We have a ramp there permanently.
There are a couple of reasons why this is problematic. First and foremost, the internet is one of the few places where someone with a disability can exist on their own without having to worry about the world treating them differently. Secondly, it is our job as developers to build experiences that are universally accessible to everyone, and that is something we actually have to try hard to get wrong.
Semantic HTML and proper thought into things like color contrast, font sizes, tab order, and content flow are all things that help contribute to making a site accessible. The reality is that we shouldn’t have to rely on knowing whether someone is using a screen reader to deliver a good experience to them.
It seems like a feature, right? Like now I can finally make a separate, tailored experience for people with different abilities. I get where you’re coming from.
Now we have the power to make a side door for some people, where everyone used to have to come in the front door. And the people who will be making these decisions (project managers, stakeholders, and developers) will probably be able-bodied. That means able-bodied people will be asking people with other abilities to come through this side door. And that feels gross.
The internet, and every website on it, should be accessible to everyone as a rule. Period.
“if (personDisabled) { print crappierSite }” is a slippery slope.
Accessibility must be embedded as the default, not as an afterthought.
Besides, what if the person uses Android not Apple?
What if the Apple user disables the setting?
And this is not even going into the much dodgier topic of privacy and surveillance.
This information can (and probably will, because humans are trash) fall into the wrong hands.
Health insurance might charge additional fees, employers might use this info to choose future employees they perceive (ugh) as “more able,” etc. This could be used for targeted ads and even political campaigns.
This is profiling at its worst.
Now developers can no longer deny it: there are users with disabilities out there, and they visit our sites. When your site has a below-average number of visitors with disabilities, is it just the topic, or is your website that bad? Finally, start blaming yourself, not people with disabilities.
A poor analogy: what if developers could find out how often users activate reader mode on their website (like I did for this article)? Or what the preferred zoom level is (me: 110% on css-tricks.com)? Should they then create a separate, simplified readability version? No! Better to think about making the default website more readable.
It is your responsibility to take away hurdles, not to deny access to “the real thing.” Become aware that everybody is disabled to a certain degree, at least temporarily; there is no on or off. And be sure you won’t recognise every disability, online and offline. Be a developer who advocates for all users.
Adding to the conversation already going on: I recently watched a video where a guy explained web 3.0 to a room of developers, and his main points hold true here too.
Stop thinking about your current problem and think about what effect your “solution” will have on the future of the web for everyone. In that talk the guy was talking more about privacy and security. This completely applies to accessibility as well.
We are building the tools today that shape the tools of several decades from now. If we build the web in such a way that it puts some people at a disadvantage compared to everyone else, that will carry on down the line for decades, hurting millions if not billions of people.
If we build the web in such a way that it benefits everyone from the get-go, preventing segregation while building tools that make creating a better experience easier for everyone, then we’re going to build a better web for everyone, not just the 30% or fewer who have perfect eyesight, hearing, and motor function, no epilepsy, and so on.
Thank you for writing this article, and thanks to so many of you for the thoughtful and intelligent responses. I know there’s been a lot of back-and-forth on the topic within the blind community. The latest thing I had heard was that the NFB is accepting Apple’s explanation that the default switch being on does not, in fact, identify users, because there’s another setting regarding events that also has to be turned on. Not sure if I agree with that. Has anyone heard an explanation from Apple that makes any sense?
As an engineer, and as a screen reader user, I’m very wary of the privacy implications of screen reader detection. That said, it’s important to understand this new feature in Apple devices properly, and in the proper context.
Accessible events (like accessibleClick or accessibleFocus) only fire when an Assistive Technology (AT) is running. This makes it possible to detect that an AT is enabled, but not which type of AT.
So listening for these events won’t tell you that a screen reader (as opposed to any other type of AT) is running. This limits the design choices that can be made based on this information.
It’s also worth noting that although accessible events are enabled by default in the OS, they are not at present enabled in Safari. Users can choose to enable them there if they wish, but they aren’t turned on by default.
Apple acknowledges that these events are an experimental feature. They were first suggested in an early version of the Accessibility Object Model (AOM) specification, but have since been dropped due to privacy concerns raised by myself and others.
Thanks for your post. I agree.
We should be responding to a user’s stated preferences, not how they appear to be using the technology!
We need to develop good personalisation solutions to support preferences like “I prefer keyboard access to a pointer,” “I don’t want self-voicing apps,” and “I want easy-to-read content without animations.”
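For what it’s worth, the web platform already exposes one stated preference of exactly this kind: prefers-reduced-motion. Here is a minimal sketch of responding to it; the “reduced-motion” class name is just an assumed convention for a site’s own stylesheet, not a standard.

```js
// Respond to a stated preference, not a detected tool: the user has asked
// for reduced motion, and we learn nothing else about them.
const reducedMotionQuery = window.matchMedia('(prefers-reduced-motion: reduce)');

function applyMotionPreference({ matches }) {
  // "reduced-motion" is an assumed class name the site's CSS would use
  // to disable non-essential animation.
  document.documentElement.classList.toggle('reduced-motion', matches);
}

applyMotionPreference(reducedMotionQuery);

// Re-apply if the user changes the setting while the page is open.
reducedMotionQuery.addEventListener('change', applyMotionPreference);
```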
It looks like Apple will remove the accessibility events switch in iOS 12.3.
Looks like this feature is still available, but now it is under Safari > Advanced > Experimental Features > AOM
https://support.apple.com/en-us/HT209655
The argument for this feature, from what I can tell, is to achieve parity with native apps, which already have access to accessibility-specific events, e.g., “accessibilityActivate(): The accessibility system calls this method when a VoiceOver user double taps the selected element.”
https://developer.apple.com/documentation/objectivec/nsobject/1615165-accessibilityactivate#discussion
https://developer.android.com/reference/android/view/accessibility/AccessibilityEvent.html#TYPE_VIEW_CLICKED
Apps, however, do have to go through a manual review process, which includes this guideline:
(vi) Data gathered from the HomeKit API, HealthKit, Consumer Health Records API, MovementDisorder APIs, ClassKit or from depth and/or facial mapping tools (e.g. ARKit, Camera APIs, or Photo APIs) may not be used for marketing, advertising or use-based data mining, including by third parties. Learn more about best practices for implementing CallKit, HealthKit, ClassKit, and ARKit.
https://developer.apple.com/app-store/review/guidelines/#data-security
None of those APIs are exposed to the web, except camera/photo.
+1 on the write-up, Mat.
For UI and UX designers so accustomed to doing things their (same) way, accessibility might seem like a burden, but once you get more into it, it turns out to be a blessing.
Accessibility forces you to be more empathetic and to (finally) describe things (said to be one of the hardest things in programming), with the bonus of making your pages search-engine friendly and reaching a broader audience/market share by including everyone.
Remember the uproar (in the design community) when the late Steve Jobs decided to kill Flash?
I think that instead of a global setting, it could be part of the Permissions API, so you get a setting per website.
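A rough sketch of what that could look like if it were routed through the existing Permissions API follows; the “accessibility-events” permission name is hypothetical and does not exist today.

```js
// Hypothetical: gate "accessibility events" behind a per-site permission.
// navigator.permissions.query() is a real API; the 'accessibility-events'
// permission name is not, and unrecognized names reject with a TypeError.
async function canUseAccessibilityEvents() {
  if (!('permissions' in navigator)) return false;

  try {
    const status = await navigator.permissions.query({ name: 'accessibility-events' });
    // Only act on it if the user explicitly granted it for this site.
    return status.state === 'granted';
  } catch {
    // Unknown permission name (i.e., today's browsers): treat as "no".
    return false;
  }
}
```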