Welcome to Press This, the WordPress community podcast from WMR. Each episode features guests from around the community and discussions of the largest issues facing WordPress developers. The following is a transcription of the original recording.


Doc Pop: You’re listening to Press This, a WordPress community podcast on WMR. Each week we spotlight members of the WordPress community. 

I’m your host, Doc Pop. I support the WordPress community through my role at WP Engine and my contributions on TorqueMag.io. You can subscribe to Press This on RedCircle, iTunes, Spotify, or your favorite podcasting app. You can also download episodes directly from WMR.fm.

Today we’re diving deep into a topic that’s not only cutting edge, but also crucial for making the web more inclusive: AI and accessibility and how those two things can work together. 

I’m thrilled today to be joined by Amber Hinds, the CEO of Equalize Digital, who recently impressed our audiences with her keynote at DE{CODE} 2024 on the potential of AI in making websites more accessible.

In today’s conversation, Amber and I will be exploring those promises and perils and how you can leverage generative AI, large language models, and everything else to enhance accessibility on WordPress websites. Amber, thank you so much for joining us today.

Amber Hinds: Yeah, thanks for having me.

Doc Pop: Let’s start this off with that DE{CODE} talk that happened last week as we’re recording.

And I got a chance to watch the whole thing. I really enjoyed it. I’m just wondering if you can kind of summarize: what is DE{CODE} and what was your talk there?

Amber Hinds: Yeah, so DE{CODE} is of course WP Engine’s developer-focused conference, which is one of the few WordPress conferences that’s fully focused on developers, which is neat. 

And I gave almost like a five-minute lightning talk as part of the keynote presentation, the whole of which was about AI. And of course, me being an accessibility advocate, I spoke about AI’s impact on accessibility, good and bad.

Doc Pop: Yeah, and before large language models and generative AI were the hot new thing, there were many tools out there that claimed to easily fix accessibility issues. In particular, in the WordPress space, there were a lot of accessibility overlays and all sorts of tools that claimed to be quick fixes: just download this plugin and you’re all good.

And those were criticized often for not really fixing issues, for just giving the website owners the feeling that maybe they had done something, but not actually fixing things for users. Is AI likely to be the same, or is this going to be different for us?

Amber Hinds: Yeah, so I mean those, the accessibility overlays, you know, I think we should almost talk about them being criticized in the present tense, right? They are currently criticized because they make a lot of really bold claims. And the biggest challenge about accessibility is that not every problem can be detected automatically with an automated testing tool.

And so, if you can’t find all of the problems automatically, how can, you know, something come out and fix it? And that’s sort of what’s leveraged against the overlays. 

And I think to some degree, that is a challenge that AI models are having. So, you know, these large language models are trained off of millions of pieces of content—billions of pieces of content on millions of websites around the world. 

But the vast majority of websites have accessibility problems. If you look at the WebAIM report, they do a report every year called the WebAIM Million, where they scan the top million websites (it was Alexa ranking) and check them for easily detectable accessibility errors. And 96 percent of them have easily detectable accessibility errors.

So this becomes a problem because we are training our AI models on inaccessible code and inaccessible content. And, you know, we’ve all probably seen where ChatGPT—what do they call it, hallucination? It makes things up. And, you know, if you don’t give it the exact right prompt, it might give you just something that’s a little bit wrong.

And if you aren’t trained enough to know that, you might not catch it. And so unless you’re really specific, like if you’re using some of these tools like GitHub Copilot to help you code, you could potentially get inaccessible code out of it. For example, it might use divs instead of buttons, because a lot of websites use divs instead of buttons. So I think that’s a challenge that we really have to figure out on the AI front.

Doc Pop: In this space we call that garbage in, garbage out. And in the context of AI, I think that’s often associated with models that are trained on biased or incorrect data, which then repeat that data as fact.

And in your presentation you mentioned, you know, Copilot is trained on sites that aren’t necessarily accessible. So if you’re using GitHub Copilot to help you build a site, it’s likely to repeat those errors. 

And I just—I was hoping to get an example of one, and you just mentioned one: buttons versus divs. Can you just quickly tell us about, like, why is that different? Why is that important to note?

Amber Hinds: Yeah, so one of the most important things for accessibility is using semantic HTML, which means HTML elements that have meanings in and of themselves that the browsers can interpret and do certain things with. 

So, when we talk about buttons on websites, there’s a couple of different kinds of buttons. In WordPress, we have the button block, which adds buttons, but they’re not actually buttons. They’re links that are styled to look like buttons. And then we have elements that control functionality, and these are true buttons. 

And so, these are things that might change a slide in a carousel or a slider. Something that might submit a form or something you can click to trigger an accordion to open and close. This would be a button. 
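To illustrate the distinction Amber is describing, here is a minimal sketch (the class names and handler are hypothetical, for illustration only):

```html
<!-- Inaccessible pattern: a div styled to look like a button.
     It is not keyboard-focusable, is not announced as a button
     by screen readers, and does not respond to Enter or Space. -->
<div class="btn" onclick="toggleAccordion()">More details</div>

<!-- Semantic HTML: a real button element. The browser makes it
     focusable, announces it as a button to assistive technology,
     and fires its click handler on Enter and Space automatically. -->
<button type="button" onclick="toggleAccordion()">More details</button>

<!-- The WordPress button block, by contrast, outputs a link styled
     to look like a button; links are for navigation, not for
     triggering functionality like an accordion or a form submit. -->
<a class="wp-block-button__link" href="/pricing">View pricing</a>
```

In short, the element you choose tells the browser and assistive technology what the thing does, which is why a styled div is not a substitute for a real button.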

And in semantic HTML, we use a literal button tag. So it’s like, you know, the