AI and Inclusion: Why the Training Gap Is Leaving Educators Behind

How AI, a powerful tool, can promote exclusion when people are not trained to use it well.

LEARNING OUTCOMES | INCLUSION | TECHNOLOGY | INCLUSIVE LANGUAGE | ACCESSIBILITY | ARTIFICIAL INTELLIGENCE

Rocco Catrone, PhD, BCBA-D, IBA, CPACC

3/12/2026 · 3 min read

[Image: purples and dark pinks visual for the AI and Inclusion article]

If you have interacted with me at any point over the last six months, I have probably talked about AI, especially AI and accessibility. These tools have been incredibly powerful for me as an individual with cognitive disabilities, supporting executive functioning and helping me make sense of the mess of information I give them. These tools have not necessarily made it faster for me to process information, but they have made it more efficient: I still spend a good amount of time putting the information in, but the production of ideas afterwards is really powerful. AI is an incredible tool that I wish I had had as I was going through school, rather than navigating intense, inequitable systems for individuals with disabilities. It could have saved me a lot of stress if these tools had existed when I was coming up through my education.

However, and this is a big however, the speed at which AI has been rolled out is incredibly dangerous. So much can now be done at scale, but the training needed to do it ethically and inclusively comes nowhere near matching the expectations that companies, administrations, and school systems are placing on their people to roll it out.

When technologies like this are implemented effectively, with proper training, they can close major gaps in individuals' repertoires for learning and completing jobs, supporting many different learning styles. But our educators and employees are not getting the time to figure out how to implement them well. Educators at all levels are fighting what seems to be a lack of critical thinking when AI is involved, and they tend to lean toward policing rather than leaning into a new tool that can be powerful. Employers are threatening, not without cause, to reduce workforces because AI can fill a lot of technical gaps. This creates fear: not only of falling behind the times, which is dangerous in any sector, but also of slowly ceding a locus of control and power to a machine. How long before this Black Mirror episode we are living in real time shifts to something where machines actually rule the world?

We have to focus on the humans pushing that technology. The systems we operate in are created by humans and are thus subject to the biases of the humans making them. If we do not take the time to build these things with inclusivity in mind, or to critically analyze the inputs and outputs of the technology we are using, we may be rocketing toward a dystopian future that ultimately makes humans obsolete.

A recent article by Matt Schumer, which has been gaining an incredible amount of attention, shares many of these same concerns, and people are taking it wholesale and freaking out. Many of these concerns are not unfounded, so it makes sense that people see their livelihoods and hard-earned career paths at risk.

What do we do about this?

Codifying policies, building checks and balances into how AI tools are rolled out, and leaning on the lived experience of the individuals who are supposed to use these tools, with many iterative checks along the way, will help prevent, or at least slow down, this dystopian future. Simply pointing at something and saying it is wrong, without actually doing the work to change it, even on small levels in our daily lives, allows these things to perpetuate. I am not just talking about the implementation of AI, but that is a story for another time.

I challenge each and every person reading this to sincerely analyze the way they are using this technology. Look at who it is affecting in your communities, the power (literal raw energy) it takes to run AI systems and server farms, and the actual effect of implementing AI considerations and technologies with the people you work with. If an intentional process is not put in place to create these checks and balances, or at the very least to help people see how this tool actually affects their workflow, we are creating our own singularity. So instead of pointing fingers at a technology that has permeated every aspect of our lives and calling it bad, or swinging to the opposite extreme and insisting everybody use it, we should treat it like the tool it is. AI is a probability generator built on human data. The calculator did not decrease people's ability to think critically about mathematics; we should approach AI implementation the same way. How do we use this powerful tool to build repertoires of critical thinking while staying aware of the ways it is affecting everyday life?

If this is something you are interested in talking more about, please reach out to me directly. I would love to hear your ideas around this topic so that we can share this information publicly and create a community of critical AI literacy and implementation.

Email us at info@techinclusionpro.com