Web developers nowadays have full insight into how browsers work. The engines are open source, and you have programmatic access to almost anything they do. You even have built-in developer tools that let you see the effects your code has. You can see what the rendering engine does and how much memory gets allocated.
With assistive technology like screenreaders, zooming tools and voice recognition, you get nothing. Often you don’t even have access to them without paying.
Closed source stalled the web for years
For years, closed browsers held back the evolution of the web platform. When I started as a web developer, there were IE, Netscape and Opera. The DOM wasn’t standardised yet, and it was a mess. Ninety percent of your job as a web developer was knowing how each browser failed to show the things you told it to. Another skill was knowing which hacks to apply to make them behave, starting with layout tables and ending with hasLayout and clearfix hacks.
It is no wonder that when Flash came about and promised a simpler, more predictable way to build web applications, people flocked to it. We wanted to build software products, not fix the platform whilst doing so.
This has been an ongoing pattern. Every time the platform improved slowly because of browser differences, developers reacted by building an abstraction. DHTML was the first wave, where we created everything in JavaScript and detected support where needed. Later on, jQuery became the poster child of abstracting browser bugs and DOM inconvenience away from you, allowing you to achieve more by writing less.
Open source browsers are amazing
Fast forward to now, when every browser is based on one open source engine or another. Developers get more insight into why things don’t work and can contribute patches. Abstractions are used to increase developer convenience and to offer more full-stack functionality than only showing a page. The platform thrives, and new and exciting innovations happen almost every week. Adoption and implementation of these features now takes months rather than years. A big step here was separating browsers from the OS, so you don’t need to wait for a new version of Windows to get a new browser version.
Assistive technology is amazing, but we don’t know why
And then we look at assistive technology, and things go dark and confusing. If you ask most web developers how screenreaders work or what to do to enable voice recognition, things go awry quickly. You either hear outdated information or find that people haven’t even considered looking into it.
Being accessible isn’t a nice-to-have. It is a necessity for software and often even a legal requirement to be allowed to release a product.
At Microsoft, a big part of my job was making sure that our software is accessible to all. And this is where things got annoying quickly.
There are ways to make things accessible, and the best place to start is semantic HTML. When you use a sensible heading structure and landmarks, people can navigate the document. When you use a button element instead of a DIV, it becomes keyboard and mouse accessible. At least that’s the idea; several browser bugs and engine maker decisions put a spanner in the works. But on the whole, this is a great start. When things get more problematic, you can use ARIA to describe functionality and hope that assistive technology understands it.
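To make that concrete, here is a minimal sketch; the save() handler and the page content are placeholders for illustration, not anything from a real product:

    <!-- Landmarks and a sensible heading structure give people something to navigate by -->
    <main>
      <h1>Orders</h1>
      <nav aria-label="Pagination">…</nav>
    </main>

    <!-- A real button is focusable, keyboard operable and announced as a button by default -->
    <button type="button" onclick="save()">Save</button>

    <!-- Retrofitting a DIV needs ARIA plus scripting, and you still depend on
         assistive technology honouring the role you declared -->
    <div role="button" tabindex="0" onclick="save()"
         onkeydown="if (event.key === 'Enter' || event.key === ' ') save()">Save</div>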
I’ve been blessed with a dedicated and skilled team of people who bent over backwards to make things accessible. The most frustrating thing, though, was that we had a ton of “won’t fix” bugs that we could not do anything about. We did everything according to spec, but the assistive technology wouldn’t do anything with the information we provided. We had no way to fix that, as there was no insight into how the software worked. We couldn’t access it to verify things ourselves, and there was no communication channel where we could file bugs.
We could talk to the Windows Narrator team and fix issues there. We filed bugs for macOS VoiceOver, with varying success in getting answers. We even submitted a patch for Orca that never got merged. And when it came to JAWS, well, there was just a wall of silence.
Ironically, assistive technology isn’t very accessible
Every accessibility advocate will tell you that nothing beats testing with real assistive technology. That makes sense; much like with mobile devices, you can simulate them, but you will find most of the issues when you use the real thing. But assistive technology is not easy to use. I keep seeing developers use a screenreader whilst looking at the screen and tabbing to move around. Screenreaders have complex keyboard shortcuts for navigation. These are muscle memory to their users, but quite a challenge to learn only to test your own products.
Then there’s the price tag. Dragon NaturallySpeaking ranges from 70 to 320 Euro. JAWS is a $95-a-year licence in the US only, a $1110 licence for the home edition, or a $1475 licence for the professional edition. You can also get a 90-day licence for $290. So, should I use a VPN to hack around that, or keep resetting my machine’s date and time, much like we did with shareware in the mid-90s?
I’ve been to a lot of accessibility conferences and brought up the subject of the price of assistive technology. Often it is dismissed as a non-issue, because people with disabilities get these tools paid for by their employers or by government benefit programs. Sounds like a great way to make money by keeping software closed and proprietary, right?
Well, NVDA showed that there is a counter-movement. It is open source, built by developers who also rely on it to use a computer, and responsive to feedback. Excellent; now we only need it to be cross-platform.
When you look at the annual screenreader survey, JAWS has 53.7% of the market, followed by NVDA at 30%. The assistive tools that come for free with the operating system are hardly used: VoiceOver on macOS serves 6.5% of users, and Microsoft Narrator only 0.5%.
The reason these newer tools don’t seem to get as much adoption is that people have used JAWS for a long time and are used to it. Understandable, considering how much work it is to get accustomed to it. But it means that I, as a developer who wants to do the right thing and create accessible content, have no way to do so. If JAWS or Dragon NaturallySpeaking doesn’t understand the correct markup or code I create, everybody loses.
Access needs to be more transparent
I really would love open source solutions to disrupt the assistive technology market. I’d love for people who can’t see to get all they need from the operating system, without having to fork over a lot of money just to access content. I want Narrator, VoiceOver and Orca to be the leaders in innovation and user convenience. NVDA could be an open source foundation for others to build their solutions on, much like WebKit and Chromium are for browsers.
But there is not much happening in that space. Maybe because it is cooler to build yet another JavaScript component framework. Or because assistive technology has always been a dark matter area. There is not much information about how these tools work, and if my experience of over 20 years as a developer is anything to go by, this might be by design. Early assistive technology had to do a lot of hacking, as operating systems didn’t have standardised accessibility APIs yet. So maybe there are decades of spaghetti code held together by chewing gum and unicorn tears that people don’t want to show the world.
from Christian Heilmann https://christianheilmann.com/2023/08/05/assistive-technology-shouldnt-be-a-mystery-box/