The year is 2023. Accessibility has been part of discussions around the World Wide Web since Tim Berners-Lee talked about it in 1994. And here we are nearly 30 years later, using the web for all sorts of things that Berners-Lee couldn’t have imagined. One of those things is online learning platforms, which allow fully remote students to easily participate in class discussions, read course materials, and submit assignments.
At least, it’s supposed to be easy. But accessibility on the web has always been an afterthought, something bolted on after a web application is released rather than a core design principle. Usually the focus is on people with visual disabilities, and the afterthoughts include adding “picture” as the text description for anatomical diagrams and things like that. These efforts are half-assed at best, since the word “picture” isn’t going to convey the actual information contained within the picture. The checkbox on the accessibility evaluation form says “do images have text descriptions for the visually impaired?” There typically isn’t a requirement that the descriptions be meaningful. And that’s usually as far as they get.
I don’t have trouble with my vision, but I do have a lot of trouble keeping up with people when they’re talking. This is a particular issue when watching video lectures, where my professor is explaining technical concepts very quickly. The learning platform we use does not have the option to play videos at reduced speed so that it’s easier to keep up. Nor is there an option to turn on subtitles. Some of my professors include a written transcript of the videos they post, but not all do. One professor doesn’t read from a script during his lectures at all; instead, he uses software to convert the audio portion of the lecture into text, which he posts without correction. The results are poor. There are obvious difficulties around having a text document with no timing information completely separate from the video showing the concepts being explained; keeping in sync to understand which explanation goes with which diagram in the video can be extremely challenging. But using software to automatically generate a “transcript” makes the experience so much worse, because not only does the text fail to match up with the audio, but sometimes it’s gibberish, or on at least two occasions factually inaccurate in ways I wasn’t initially able to detect. This adds additional stress to an already stressful situation. Now I have to double-check every fact in the provided text for accuracy before I can use it, and that’s when text is supplied at all.
This semester, I once again had the opportunity to complete an assignment consisting of a series of slides with voiceover narration. I’ve written about the problems with this several times. It hasn’t gotten easier. It took me the same amount of time to craft my slides as it took my classmates, but generating the audio and fitting it to the slides took many hours. And just as in previous assignments, I took the extra step of taking the script for the presentation and making carefully timed subtitles to help make the video more accessible. This isn’t a requirement for the assignment, but it’s something I feel is important, if for no other reason than it draws attention to an accessibility concern. I decided to take the opportunity to explore what options our learning platform provided for subtitles, to see if the professors could use them but chose not to, or if the platform itself didn’t support them. My findings were disappointing.
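For anyone curious what “carefully timed subtitles” actually involve: the common SRT format is just numbered plain-text cues, each with a start and end timestamp. Here’s a minimal sketch of generating SRT cues from a narration script — the slide texts, timings, and helper names are illustrative placeholders, not taken from my actual assignment:

```python
# Sketch: turn a narration script with timings into SRT subtitle cues.
# All cue text and timestamps below are illustrative placeholders.

def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    hours, rem = divmod(ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02}:{minutes:02}:{secs:02},{ms:03}"

def to_srt(cues) -> str:
    """cues: list of (start_sec, end_sec, text) tuples -> SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

# Example narration timings for two slides (placeholder values).
cues = [
    (0.0, 4.5, "Welcome to the presentation."),
    (4.5, 9.0, "This slide covers the first concept."),
]
print(to_srt(cues))
```

The tedious part isn’t the format — it’s choosing those start and end times so each cue appears exactly when the corresponding narration plays, which is why this took hours by hand.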
There are several ways to upload video content to this platform. All of them are broken to various degrees. None of them supports video files with embedded subtitle tracks, offers a way to upload a separate subtitle file, or allows subtitle text to be entered manually. In fact, video files are “processed” by the platform, presumably to decrease the file size to save on bandwidth cost, and this process actively removes subtitle information. That means that anyone who downloads the video to watch with a real player can’t use the subtitles either.
When I raised this issue with the platform’s tech support department, I was told not to worry, that in the future they would release a brand new feature where subtitles would be… automatically generated. Which leads to the same accessibility issues I identified earlier. When I pointed out the problems with that approach, I was met with silence.
Because this video is being reviewed not only by my professor but by my classmates as well, and that review is part of our grades, it was extra important to me to ensure that my classmates had as accessible an experience as I could provide under the circumstances. So I provided the video, along with a second copy of the video played at reduced speed, and a full correct transcript. And I offered to send my classmates, via means other than this platform, the video file that includes the subtitle track if needed.
The web needs to be accessible. Online learning platforms in particular have a duty to make accessibility a primary focus of their products. And MLIS programs should absolutely make real accessibility a requirement when selecting a platform to adopt. Doing anything less is reinforcing systemic ableism and restricts access to education and information, which is antithetical to the ethics of librarianship. And that is what I will continue to advocate for.