Kids teaching robots: Is this the future of education?
This article was written by Chris Berdik
Avatars, robots and elves are being invented as virtual pupils to help students learn by teaching
What do a precocious computer elf, a math-loving avatar and a robot with terrible handwriting have in common? They’re all digital spins on the educational theory of learning-by-teaching.
Decades of studies have shown that students learn a subject better when asked to help another learner. Traditionally, this meant taking the time to pair off students into peer-tutoring arrangements. Now, education researchers at about a dozen universities around the world are trying to supercharge the idea with technology. They’re creating virtual learners that need a human student to teach them everything from history to earth science.
Unlike real students, these “teachable agents” don’t get embarrassed or frustrated when they don’t know something. They respond reliably to good teaching without making random or silly mistakes, and their impact on learning can be precisely tracked and measured. Within a decade, teachable agents could be a classroom mainstay, researchers estimate. For now, most remain creatures of the lab, encountering real classrooms only in pilot studies.
While learning-by-teaching can be automated, the benefits are not automatic. Making a really effective digital learner isn’t simple.
“There’s not really just one reason why learning-by-teaching works so well,” said Daniel Schwartz, an education professor at Stanford who leads a lab developing teachable agents. “It’s a happy confluence of forces that help students learn. There’s a lot going on.”
For instance, having students teach pushes them to think about a topic’s underlying concepts and connections in order to gauge what another student knows and to build on that understanding. To boost this meta-cognitive effect, Schwartz and a team at Vanderbilt, led by electrical engineering and computer science professor Gautam Biswas, made a teachable agent for science lessons that displays every step of its thought process on screen as it learns. They called it Betty’s Brain.
To teach Betty about ecosystems, for example, a student builds a map of ecosystem knowledge in her “brain” by linking together words — such as various plants, animals and nutrients — with lines indicating specific kinds of relationships (this eats that, or this causes that, etc.). Gradually, Betty’s brain becomes filled with an on-screen diagram of systems such as food webs, water flows and nutrient cycles. And when Betty is quizzed by another avatar named Mr. Davis, the words and the links between them are highlighted in a sequence as she considers her answers.
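For readers curious about the mechanics, Betty’s “brain” boils down to a directed graph of typed links. Here is a minimal Python sketch of how such a map might be taught and quizzed; the class and method names are invented for illustration and are not from the actual Betty’s Brain software.

```python
from collections import defaultdict

class ConceptMap:
    """A toy Betty's Brain-style knowledge map: concepts linked by typed relations."""

    def __init__(self):
        # edges[source] -> list of (relation, target), e.g. ("eats", "algae")
        self.edges = defaultdict(list)

    def teach(self, source, relation, target):
        """The student 'teaches' the agent one link at a time."""
        self.edges[source].append((relation, target))

    def explain(self, start, goal, path=None):
        """Answer a quiz question by searching for a chain of links from
        start to goal, returning the reasoning chain (or None)."""
        path = path or []
        if start == goal:
            return path
        for relation, target in self.edges[start]:
            if target not in [t for _, t in path]:  # avoid revisiting concepts
                result = self.explain(target, goal, path + [(relation, target)])
                if result is not None:
                    return result
        return None

betty = ConceptMap()
betty.teach("sunlight", "causes", "algae growth")
betty.teach("algae growth", "feeds", "fish")
print(betty.explain("sunlight", "fish"))
# [('causes', 'algae growth'), ('feeds', 'fish')]
```

Highlighting each step of such a chain as the agent answers is what lets students see exactly where a faulty link sends the reasoning astray.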
“With Betty, you get to see how she figures things out, based on what you’ve taught her,” said Schwartz. “It teaches students how to reason through chains of ideas.”
When Betty messes up, you can see precisely which connections led her down the wrong path. To help debug a faulty brain, the student can click into Mr. Davis’s online library of scientific information — the sort of background support, or what educators call scaffolding, that teachable agents need to be effective. In pilot studies, students in science classes that used Betty’s Brain during a semester not only learned the curriculum better than students who didn’t, they were also better able to use scientific reasoning in a separate assessment.
Another way to spur meta-cognition and deeper learning might be to make an agent that occasionally disagrees with its teacher, even when the teacher is right. That’s the idea behind Time Elf, an ambitious sprite trying to succeed the retiring “Guardians of History” in a game made by the Education Technology Group at Sweden’s Lund University. From time to time, the Time Elf challenges the student’s grasp of history, like this: “I think you have the date wrong. Are you sure?”
Time Elf is still in the early stages of development. In an initial study published last year, students teaching the elf were too willing to change their answers when he objected, even when their original answer was correct. In the next iteration, the researchers will make sure students understand that the elf knows nothing until they teach it to him, and they’ll soften the elf’s language so he doesn’t seem so sure of himself.
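To make the mechanism concrete, here is a toy Python sketch of how an occasionally contrary agent might respond, with a switch for how confidently it objects; this is a guess at the general idea, not the Lund group’s code.

```python
import random

def elf_reply(answer, challenge_rate=0.25, hedged=True):
    """Respond to a fact the student just taught the elf.

    The elf objects at random, even to correct answers, to push the
    student to justify the fact rather than simply assert it.
    """
    if random.random() < challenge_rate:
        # Hedged phrasing signals the elf only knows what it was taught,
        # so students are less tempted to abandon a correct answer.
        if hedged:
            return f"Hmm, '{answer}' isn't what I expected. Can you explain why?"
        return f"I think '{answer}' is wrong. Are you sure?"
    return f"Okay! I'll remember '{answer}'."

print(elf_reply("The Battle of Hastings was in 1066"))
```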
The study did confirm that students found the challenging elf more fun and engaging to teach than a completely acquiescent one, according to Agneta Gulz, a Lund University professor of cognitive science.
“That’s an input we had from students. They wanted the agent to have more personality,” she said. “We are trying to build more realism into that interaction.”
The level of engagement is important, because another reason students learn by teaching is the so-called protégé effect. Namely, acting as a teacher makes students feel responsible for their tutee’s learning, leading them to be more persistent in covering the material than they would be alone.
With digital protégés, “if students care more about the agent, then they’ll be more engaged, and so they’ll learn more,” said Noboru Matsuda, a scientist who studies human-computer interaction at Carnegie Mellon University. Matsuda helped create a teachable agent called SimStudent that learns algebra. Students name their SimStudents and customize their look, hair and wardrobe before preparing them for increasingly difficult algebra quizzes.
Pilot studies with middle school math students show that caring about SimStudent’s success improves student learning, but only up to a point. Recently, Matsuda and his team tried to up the ante on engagement by encouraging kids to challenge each other’s SimStudents in “game shows” where winners and losers would move up or down in the SimStudent rankings. It didn’t work.
“Kids love to win,” said Matsuda. But rather than trying to win by tutoring their SimStudent more attentively, the students simply challenged weaker opponents to gain points. “They found a strategy that focused on the game show and not on the teaching,” said Matsuda.
The flipside of success, of course, is failure. And the third way teachable agents can help students is by making academic struggles a little less personal, which makes students more willing to keep trying when the learning gets tough. One case in point is a little humanoid robot called the CoWriter, jointly developed by Portugal’s Instituto Superior Técnico and the Computer-Human Interaction in Learning and Instruction (CHILI) lab at Switzerland’s École Polytechnique Fédérale de Lausanne.
Studies show that young kids who initially struggle with handwriting get easily discouraged, leading them to avoid much-needed practice. So the CoWriter’s creators made a robot with horrible handwriting (its letters are badly deformed in specific ways by a shape-shifting algorithm) that asks a young student for help. The student writes practice letters and shows them to the CoWriter, which gamely tries to copy them. Gradually, via machine learning, the robot improves its letters based on the student’s examples.
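To picture the loop, imagine a letter as a handful of 2D points: the robot starts from a deliberately deformed copy and nudges its points toward each student demonstration. The Python sketch below is a toy version under that assumption; the real CoWriter uses a much richer learned shape model.

```python
import random

def deform(letter, amount=0.3):
    """Give the robot a badly deformed copy of a template letter to start."""
    return [(x + random.uniform(-amount, amount),
             y + random.uniform(-amount, amount)) for x, y in letter]

def learn_from_demo(robot_letter, student_letter, rate=0.5):
    """Pull each robot point partway toward the student's demonstration,
    so the robot's writing visibly improves after every practice round."""
    return [(rx + rate * (sx - rx), ry + rate * (sy - ry))
            for (rx, ry), (sx, sy) in zip(robot_letter, student_letter)]

template = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]  # a crude three-point "A"
robot_letter = deform(template)
for _ in range(3):                                # three practice rounds
    robot_letter = learn_from_demo(robot_letter, template)
# Each round halves the remaining error, so the robot's letter converges
# on the student's example.
```

The point of the deliberately bad start is pedagogical, not technical: the child must always be visibly better than the robot, so the child stays in the teacher’s seat.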
“One hope of this project is that by making the child into the teacher, students who felt they weren’t capable of writing can recover their self-confidence,” said Séverin Lemaignan, a post-doctoral researcher with CHILI. “Triggering a change in mindset may be enough to turn them into much better writers.”
This story was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.
Chris Berdik is a science journalist who has written about a wide variety of topics, including the intersection of science with ethical issues and the peculiarities of the human brain.