AI; or, What Professors Can Learn from Chess
Now a writer and university professor, Iheoma Nwachukwu played professional chess in Nigeria for ten years. Here he considers what chess’s more rigorous contact with AI can teach professors grappling with the spark between students and AI.
In 1769 Wolfgang von Kempelen, a Hungarian inventor and author, assembled the first AI chess player: a figure hewn out of maple wood. This mannequin, called the Mechanical Turk, defeated players with crackerjack résumés while perched beside a large wooden cabinet. It sported a rugged mustache and wore a turban wrapped around a fez. Von Kempelen toured Europe to show off his device. Mr. Turk went on to beat Benjamin Franklin in 1783 and Napoleon Bonaparte in 1809. Although the machine turned out to be a hoax (there was a pony-sized human inside the cabinet who played the moves, sliding from side to side on a rolling chair), von Kempelen’s antics arguably mark the onset of chess’s romance with AI.
Chess boasts a more rigorous contact with artificial intelligence than academia, where enthusiasm for AI faltered in the 1970s. Chess benefited immensely from the 1957 Bernstein program, the 1967 Greenblatt program, the 1970 Soviet Kaissa, the 1978 Belle, the 1983 Cray Blitz, the 1985 HiTech, the 1991 Fritz, and the 1997 Deep Blue, down to today’s parade of grandmaster-crushing engines. Skepticism did follow the advance of these programs. Early versions were feeble, falling easily to any yardbird. Additionally, many players at the time believed the sport was an intrinsically human pursuit, depending solely on intuition and natural ability.
But as chess programs rallied, becoming too strong for any human to beat, especially in the 2000s, attitudes shifted toward acceptance and the recruitment of these programs for training. Today’s players jab with AI to coax themselves out of mediocrity and ignorance. To teach, not cheat. (This isn’t to say AI hasn’t been employed by dirty players to chisel the competition, especially in online chess.) This use of AI as a pathfinder, a Dutch uncle, is the perspective I think is absent in how academia grapples with the spark between students and AI.
Instead of slapping the lid on AI use, perhaps we ought to be asking students: What have you learned from AI? We might nudge them toward seeing AI as a collaborative tool, not one for trickery. College kids can be encouraged to use AI on their papers, but only in order to pore over its output, glean what they can from the program’s work (structure, word choice, angle, bias), and consider how they might improve their own writing from the notes they’ve taken.
I’m not suggesting that teachers ignore intentional fraud. Using AI to prepare assignments disguised as human work merits penalties. But what students need is guidance. Many are at an impressionable age or under a great deal of pressure, especially those who work after classes. They can use AI to accelerate their own learning.
Some in academia might scoff at this prospect. After all, when computers were first introduced into the classroom, teachers eyed them with horror. There were fears of job displacement, of the time commitment involved, of pedagogical bias, and of the dehumanization of education. Today’s teachers need not go through the five stages of grief before they accept what’s coming. Students are never letting go of ChatGPT or Grammarly. We have to adapt.
In my experience as a college professor, I have come across students who, despite being cautioned about AI use on earlier assignments, continue to rely on it as though I had never issued a prior admonishment. Finding themselves on the brink of failure, such students clear out a week before finals. Other professor-acquaintances note the same pattern.
Cognitive debt is often cited as one of the drawbacks of allowing students to interact with AI. While recent research supports this effect, teachers might counter it by adjusting syllabi so that classroom activity is more involved: classroom participation points should go up significantly, challenging questions should be thrown at students, and students should be encouraged to unpack their own thinking process, to publicly share the pathway to their answers. Classroom debates can also be organized, as well as “devil’s advocate” exercises.
If we’re able to initiate this turnaround in how students engage with AI, what about services like Grammarly that market their products directly to colleges, vowing to help students rewrite their sentences (and to provide outlines for essay topics, though students rarely use Grammarly for that) and touting packages that promise to elevate students’ writing, earn them better grades, and make them look good in front of their professors and peers? What would be the consequences of exposing students to this messaging while we fight to reorient their attitude in the classroom? Will we end up with an army of befuddled, entitled, or slighted students? And will we respond by fighting harder? Will our labor resemble that of Sisyphus, heaving a boulder up a hillside only to watch all our hard work roll away?
I’ve had frightening fantasies of clunky, computer-powered Mechanical Turks overrunning college campuses (like the robotic mummies in Doctor Who’s Pyramids of Mars), snatching teachers’ jobs in the classroom, redesigning courses, and automating administrative tasks to free other robots for academic research. Isn’t this what they say AI will do in the long run? Distort our reality, whether what we perceive as reality is real or a Berkeleyan horror. Perhaps a Mechanical Turk, being not a human Turk but a mechanical one, would react to students’ AI use by withholding judgment, offering encouragement, then directing young scholars to collaborate with artificial intelligence, instead of shushing them as we are wont to do? Perhaps a Mechanical Turk would build a rival to Grammarly, without the sentence-rewrite feature, and offer it pro bono to students? All provocative thoughts.
We leap toward the future like Wukong. Forty or fifty years from now, what’s heresy will become convention. And adopting AI in the classroom isn’t accepting change for change’s sake. It is survival on our own terms. It is shaping the technology used in the classroom. Students have made the first move, and we must attempt to respond intelligently.
Eastern University