Recently, North Carolina State University researchers looked at how students behaved while playing Crystal Island, an educational game. Using an artificial intelligence (AI) model built on the machine learning training concept of multi-task learning, they can now predict how much students are learning in educational games. This information could be used to improve both instruction and learning outcomes.
Multi-task learning is an approach in which one model is asked to perform multiple tasks at the same time, exploiting commonalities and differences across the tasks.
“In our case, we wanted the model to be able to predict whether a student would answer each question on a test correctly,” says Jonathan Rowe, co-author of a paper on the work and a research scientist in North Carolina State University’s Center for Educational Informatics (CEI). “The standard approach for solving this problem looks only at overall test score, viewing the test as one task,” Rowe says. “In the context of our multi-task learning framework, the model has 17 tasks, because the test has 17 questions.”
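In machine learning terms, that setup typically means a single network with a shared representation feeding one output head per task. The sketch below, written in PyTorch, is one minimal way to structure such a model. It assumes gameplay has been summarized into a fixed-length feature vector; the feature size, layer widths, and class name are invented for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

NUM_QUESTIONS = 17  # one prediction task per test question


class MultiTaskStudentModel(nn.Module):
    """Hypothetical multi-task model: a shared gameplay encoder
    with one binary output head per test question."""

    def __init__(self, gameplay_feature_dim: int = 64, hidden_dim: int = 32):
        super().__init__()
        # Shared layers learn gameplay patterns common to all questions.
        self.shared = nn.Sequential(
            nn.Linear(gameplay_feature_dim, hidden_dim),
            nn.ReLU(),
        )
        # One head per question: will the student answer it correctly?
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, 1) for _ in range(NUM_QUESTIONS)]
        )

    def forward(self, gameplay_features: torch.Tensor) -> torch.Tensor:
        shared = self.shared(gameplay_features)
        # One logit per question; output shape is (batch, 17).
        return torch.cat([head(shared) for head in self.heads], dim=1)
```

Because every head sits on top of the same shared layers, patterns useful for predicting one question can also inform predictions for the others, which is the core idea behind multi-task learning.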
The researchers had gameplay and testing data from 181 students. The AI could look at each student’s gameplay and at how each student answered Question 1 on the test. By identifying behaviors common to students who answered Question 1 correctly, and behaviors common to students who got Question 1 wrong, the AI could predict how a new student would answer Question 1.
This analysis is performed for every question at the same time; the gameplay reviewed for a given student is the same, but the AI examines that behavior in the context of Question 2, Question 3, and so on.
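Continuing the sketch above, and assuming each of the 181 students can be reduced to one gameplay feature vector plus 17 correct/incorrect labels, joint training over all questions at once might look like the following. The random tensors stand in for real gameplay and test data, and the hyperparameters are placeholders, not the paper's actual settings.

```python
import torch

model = MultiTaskStudentModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.BCEWithLogitsLoss()

# Stand-in data: 181 students, 64 gameplay features each,
# and a 0/1 label per student for each of the 17 test questions.
gameplay = torch.randn(181, 64)
answers = torch.randint(0, 2, (181, 17)).float()

for epoch in range(100):
    optimizer.zero_grad()
    logits = model(gameplay)         # one prediction per question, per student
    loss = loss_fn(logits, answers)  # all 17 tasks contribute to a single loss
    loss.backward()                  # shared layers get gradients from every task
    optimizer.step()
```

Summing the per-question losses into one objective is what lets the shared layers learn from all 17 questions simultaneously, rather than training 17 separate models or predicting only a single overall score.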
And this multi-task approach made a difference. The researchers found that the multi-task model was about 10 percent more accurate than other models that relied on conventional AI training methods.
“We envision this type of model being used in a couple of ways that can benefit students,” says Michael Geden, first author of the paper and a postdoctoral researcher at NC State. “It could be used to notify teachers when a student’s gameplay suggests the student may need additional instruction. It could also be used to facilitate adaptive gameplay features in the game itself. For example, altering a storyline in order to revisit the concepts that a student is struggling with.
“Psychology has long recognized that different questions have different values,” Geden says. “Our work here takes an interdisciplinary approach that marries this aspect of psychology with deep learning and machine learning approaches to AI.”
“This also opens the door to incorporating more complex modeling techniques into educational software — particularly educational software that adapts to the needs of the student,” says Andrew Emerson, co-author of the paper and a Ph.D. student at NC State.