Can AI Foster Equity in Education?

Across the UMD College of Education, Students and Faculty Are Exploring How to Harness AI to Create More Equitable Opportunities for All Learning Levels. The Key? It's in the Way that You Use It.
Endeavors 2024 cover

Artificial intelligence (AI) has been called many things, but "equalizing force" probably doesn't leap to mind for most people. Then again, most people don't have the same life experiences as graduate student Muhammad Fusenig, who temporarily lost his ability to read, write and understand language after suffering hemiplegic migraines in his early 20s.

"There were many times when I could not adequately express myself or find the words that I knew I had the capacity for," said Fusenig, who is pursuing a Ph.D. in educational psychology. "It was an incredibly limiting experience."

As an undergrad who studied political science at the University of California, Davis, Fusenig initially aspired to a career that would draw on the very skills he was struggling with: speaking, writing and reading. But as he began having frightening stroke-like symptoms shortly after undergoing surgery for a spinal injury, his focus shifted to understanding the mechanisms behind language and how AI-based language-processing tools might help him and others experiencing similar difficulties. He developed an assistive writing program to aid him in finding the right words when his brain wouldn't deliver them.

His interest in AI brought him across disciplines, and across the country, to study the practical and theoretical aspects of artificial intelligence at the UMD College of Education. "There's a lot of moralizing that goes on with [AI]," said Fusenig, who is evaluating how college students are using his AI-based software and what their motivations are. "But we don't really know why somebody's using it in school or out of school," he added. "To me, AI seems like a very equalizing force."

Fusenig is part of a small team at UMD headed by Patricia A. Alexander, Distinguished University Professor and world expert on text-based learning, knowledge development and reasoning. His story underscores that AI, like so many transformative technologies, cannot be reduced to a single word like good or bad.

In other words, people's common fears that AI can be used for cheating and learning shortcuts are true. Yet equally true is the reality that AI can help level the playing field for historically excluded students or empower teachers in marginalized communities with effective new tools they wouldn't be able to access otherwise.

"Context is everything," explained Alexander. "Any form of AI has its pros and its cons, and one of the things we have to be sensitive to and aware of is, how do we prepare students to use those devices in a way that proves them to be extremely facilitative?"

Alexander and Fusenig are among the many experts at the UMD College of Education looking to harness the power of AI to foster equity at all levels of education. While some faculty and students are focused on providing fairer learning experiences for historically excluded students in STEM (science, technology, engineering and math), others are exploring how to expand access to training among K-12 teachers. What they all have in common is a nuanced understanding that, given the explosive growth of AI, it's incumbent on educators to lead the way to its responsible integration in learning spaces everywhere.

Generative AI to increase students' confidence and belonging

For David Weintrop, associate professor with a joint appointment in the College of Education's Department of Teaching and Learning, Policy and Leadership, and the College of Information, studying AI in the context of computer science was a logical choice. "People are often surprised at how incredibly powerful generative AI is when it comes to writing code," he said. "You can give tools like ChatGPT very high-level, abstract prompts, and they can produce functioning code that can do lots of things that otherwise would take a very long time."

Weintrop is coleading a project with Assistant Professor Joel Chan in UMD's College of Information to evaluate whether the widespread availability of large language models like ChatGPT is helping or hindering college students from historically excluded backgrounds (those who are Black, Indigenous or people of color; women or nonbinary; English language learners; or first-generation college students) in learning introductory programming. The project is funded by a $60,000 grant from Google and a $50,000 grant from UMD's Teaching and Learning Transformation Center.

Weintrop and the team want to understand how these learners perceive generative AI. On one hand, they theorize that the students may find these tools build their confidence by augmenting learning. On the other, the researchers also think it's possible the learners could find generative AI alienating. For example, if ChatGPT can enable good grades without learning, historically excluded students might experience an amplified sense of imposter syndrome. They may feel like, "I'm just faking it and using these other tools, and I still don't know what's going on," Weintrop said.

In Spring 2024, the team collected baseline data from students in introductory computing classes. In the fall, they're rolling out the same course, but this time integrating generative AI tools into the design of the class. Ultimately, they'll measure the impact of the large language models on the students' interest, confidence, self-efficacy and sense of belonging.

Although the study won't conclude until December, Weintrop has already made some intriguing observations, noting that most students don't want to use AI as a shortcut to learning. "They wanted to develop a deep conceptual understanding and fundamental proficiency in learning to program," he said.

But that doesn't mean they can't benefit from generative AI at all. "Students are going to be graduating into a world where those tools are accessible," Weintrop said. "So it makes sense to me that they learn how to use them in responsible and rigorous ways as opposed to just pretending they don't exist."

Janet Shufor Ph.D. '24, who recently earned her doctorate in teaching and learning, policy and leadership with a specialization in technology, learning and leadership, and worked in Weintrop's group, explored similar questions about the impact of generative AI on historically excluded K-12 students learning to code. She found that even younger learners were largely able to self-moderate their use of AI. "They don't rely on the tools to write the code because they're aware they have to learn to code themselves," she noted.

Natural language processing to guide educators

Natural language processing (NLP) is a component of AI that allows a computer program to understand human language as it's spoken or written. In his project, which is funded by a grant from UMD, Assistant Professor Jing Liu and his team are using NLP to analyze transcripts from K-12 mathematics classrooms and provide specific, actionable feedback to help teachers improve. For example, they're exploring how often the educators interact with students or take up their ideas.
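Liu's actual pipeline relies on trained NLP models, but the kind of signal such feedback draws on can be sketched in a few lines. The transcript below and the keyword heuristics (a question mark marks a teacher question; shared words mark "uptake" of a student's idea) are purely hypothetical illustrations, not the team's method.

```python
# Toy illustration: tally simple instructional moves in a classroom
# transcript. The transcript and heuristics here are hypothetical;
# real systems use trained language models, not word overlap.
from collections import Counter

TRANSCRIPT = [
    ("teacher", "What do you notice about these two fractions?"),
    ("student", "The denominators are different."),
    ("teacher", "Right, the denominators are different. So what next?"),
    ("student", "We could find a common denominator."),
    ("teacher", "Let's try that on the board."),
]

def tally_moves(transcript):
    """Count teacher questions and 'uptake' (teacher reusing a student's words)."""
    counts = Counter()
    prev_speaker, prev_words = None, set()
    for speaker, utterance in transcript:
        words = {w.strip("?.,!").lower() for w in utterance.split()}
        if speaker == "teacher":
            if utterance.rstrip().endswith("?"):
                counts["teacher_question"] += 1
            # Crude uptake check: the teacher reuses 2+ of the student's words.
            if prev_speaker == "student" and len(words & prev_words) >= 2:
                counts["uptake"] += 1
        prev_speaker, prev_words = speaker, words
    return counts

print(tally_moves(TRANSCRIPT))  # 2 teacher questions, 1 uptake move
```

Aggregated over many lessons, even counts like these become the raw material for the specific, actionable feedback the article describes.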

"I generate the feedback as a way [to guide] teachers' professional learning, so that, for schools and classrooms that don't have a lot of resources, and teachers who don't have a lot of support in terms of their instructional practices, they can at least rely on AI tools," Liu said. "I think that's just another way to think about equity."

Liu was a member of the faculty group that helped launch the campuswide Artificial Intelligence Interdisciplinary Institute at Maryland (AIM) in April, which supports faculty research, learning opportunities and advances in ethical AI.

One of his team's initial observations is that NLP alone appears to be less helpful to educators than NLP combined with human input. Liu noted that teachers who were given AI-based analyses alone generally found it challenging to interpret the feedback and weren't particularly motivated to do so. However, when combined with human coaching, NLP seemed to be a very helpful training tool.

In fact, Liu recently received a grant from the Overdeck Family Foundation to conduct a study that will assess the impact of combining human coaching with automated feedback. 

He is also leading an effort jointly supported with $4.5 million from the Bill & Melinda Gates Foundation, the Walton Family Foundation and the Chan Zuckerberg Initiative to create high-quality benchmark data on math teaching from diverse upper elementary and middle schools, information that is currently lacking, he said. "From a technical perspective, when you develop AI models, the initial data you use to train the models has to be pretty representative, so that your downstream model and applications can be more unbiased," he added. To address that, Liu is collecting a wealth of information from demographic surveys and other sources about students' test scores, sense of belonging, perceptions of mathematics and more. The data will be used to train subsequent AI models using the highest-quality, and most equitable, information possible. "So rather than just developing more AI models, let's take a step back and create the best data first," he explained.
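The representativeness check Liu describes can be made concrete with a small sketch. The group labels, population shares and counts below are invented for illustration; the idea is simply to compare a training corpus's demographic mix against a target population before any model is trained.

```python
# Illustrative sketch with hypothetical numbers: flag groups whose share
# of the training corpus drifts from their share of the target population.

TARGET = {"group_a": 0.45, "group_b": 0.35, "group_c": 0.20}   # population shares
SAMPLE = {"group_a": 520, "group_b": 410, "group_c": 70}       # corpus counts

def representation_gaps(sample_counts, target_shares, tolerance=0.05):
    """Return group -> (corpus share minus target share) for groups
    that deviate by more than the tolerance."""
    total = sum(sample_counts.values())
    gaps = {}
    for group, target in target_shares.items():
        share = sample_counts.get(group, 0) / total
        if abs(share - target) > tolerance:
            gaps[group] = round(share - target, 3)
    return gaps

print(representation_gaps(SAMPLE, TARGET))
# group_c is badly under-represented (-0.13), so a model trained on this
# corpus would see too few examples from that group.
```

A check like this is the "create the best data first" step: fix the corpus before training, rather than debugging bias in the downstream model.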

AI-based "virtual students" to enhance teacher training

When Fengfeng Ke joins the UMD College of Education as a new Clark Leadership Chair in January 2025, she says that one of the questions she wants to explore is: "How can we train the teachers ... to help the classroom to become more equitable or more inclusive and more personalized in general?" Ke will bring to UMD a deep background in game-based learning, immersive learning, computer-supported collaborative learning and the inclusive design of e-learning.

Her research looks at using AI-powered "virtual students" as part of educators' preservice training protocols, with the goal of creating an accessible, scalable teaching simulation that can augment in-person practicums, she said. A National Science Foundation award of approximately $600,000 supports this work.

The virtual students draw on large language models to help educators practice interacting with a diverse group of students. The technology offers a more sophisticated training modality than traditional role-playing exercises that use fixed decision trees, which are flowchart-like tools that map the outcomes of particular courses of action. 
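The contrast between the two training modalities can be sketched briefly. Everything below, including the structure of the scripted tree and the persona prompt, is a hypothetical illustration rather than Ke's actual system: a fixed decision tree maps each teacher move to one canned student reply, while an LLM-backed virtual student would instead generate a free-form reply from a persona prompt.

```python
# Hypothetical contrast: fixed decision-tree role-play vs. an
# LLM-backed "virtual student." Not an actual training system.

# Decision-tree version: every teacher move has exactly one scripted reply
# and a fixed menu of follow-up moves, flowchart style.
SCRIPTED_STUDENT = {
    "ask_open_question": {
        "reply": "I'm not sure... can you show an example?",
        "next": ["give_example", "rephrase_question"],
    },
    "give_example": {
        "reply": "Oh, I think I see the pattern now.",
        "next": [],
    },
    "rephrase_question": {
        "reply": "Hmm, I still don't get it.",
        "next": ["give_example"],
    },
}

def scripted_reply(teacher_move):
    """Fixed tree: the same move always yields the same scripted reply."""
    node = SCRIPTED_STUDENT[teacher_move]
    return node["reply"], node["next"]

def llm_prompt(persona, teacher_utterance):
    """LLM version: build a persona prompt instead of looking up a script,
    so the 'student' can respond to anything the teacher says."""
    return (f"You are {persona}. Respond in character to the teacher, "
            f"in one or two sentences.\nTeacher: {teacher_utterance}\nStudent:")

reply, options = scripted_reply("ask_open_question")
print(reply)  # always the same canned line
print(llm_prompt("a 7th grader who is anxious about fractions",
                 "What do you notice about these two fractions?"))
```

The tree can only react to moves its authors anticipated; the prompt-based version is what lets an LLM-backed simulation stay in character across the open-ended interactions preservice teachers need to practice.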

Ke is particularly interested in helping teachers work more effectively with neurodiverse populations, including students with autism, although she does not yet have access to the dataset needed to evaluate an AI-based intervention. Like Liu, she noted that high-quality data from diverse populations are needed (in her case, data representative of neurodiverse individuals) before further AI models can be developed to benefit that group. "It's a critical bottleneck right now," she said. But eventually, she hopes to explore whether simulations enhanced by generative AI can be empowering tools for both neurodiverse students and the teachers, caregivers and healthcare providers who interact with them.

The role of educators and students

As a renowned thought leader on AI in education, Alexander regularly consults with faculty in engineering, medicine, writing and other disciplines, in addition to teaching students. She emphasizes that everyone has a role to play when it comes to ensuring the technology is used responsibly. That begins with understanding that AI should never be used as a substitute for one's own thinking, which defeats the purpose of learning. For that reason, Alexander asks her students to first master tasks and concepts without using technology.

"But once they have acquired some fundamental skills, augmenting those with AI is very helpful to them," she added. "We all look up words. We all do things like that all the time, even if we consider ourselves to be proficient."

In addition, educators have a responsibility to talk about the AI elephant in the classroom by providing thoughtful guidance to students on how to use the technology effectively while acknowledging its shortcomings. That includes reinforcing that AI is only as inclusive as the data it draws upon, which is frequently lacking in that respect. 

Yet, as the critical work being done by UMD College of Education faculty and students highlights, AI also has tremendous potential to create fairer, more individualized educational experiences. You might even call it an equalizing force.

Illustration by Jeannie Phan