In 2023 the AFL-CIO signed an agreement with Microsoft that both parties hoped would benefit their respective organizations. The sixty unions that make up the AFL-CIO would get a promise of neutrality from Microsoft and its suppliers in the event that workers at any of those companies wanted to join a union; Microsoft also agreed not to interfere with union efforts to organize workers.
Microsoft and companies in which Microsoft holds a significant stake, such as OpenAI, would get access to those sixty unions and their members in order to proselytize for AI, to create problems for AI to solve, and to train workers to use AI tools to solve them.
The first union to cash in on Microsoft’s new marketing strategy for its chatbot product line turns out to be the American Federation of Teachers (AFT), whose president, Randi Weingarten, has previously embraced Microsoft technologies developed and marketed for schools (see here, here, here, and here).
On July 8 of this year, AFT, Microsoft, OpenAI, and Anthropic announced a deal that will provide AFT with $23 million to open an AI training facility in New York City that will serve as the hub of a joint project to train 400,000 teachers nationwide:
The $23 million in combined support is structured as follows: Microsoft is contributing $12.5 million over five years, OpenAI is providing $8 million in direct funding plus $2 million in technical resources, and Anthropic is adding $500,000 in first-year support. This collaboration represents a significant commitment from the tech industry to ensure educators are central to the development of AI in education.
Unfortunately for America’s public school students, AFT’s and Microsoft’s deal to get ChatGPT into classrooms comes with unknown costs for the developing brains of students. The research on the effects of AI on children and adults is sparse, and the full-speed-ahead approach of AFT is not borne out by the research that does exist.
We know, in fact, from a recently published study by Microsoft’s own researchers, that "while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort.”
The study was conducted with 319 adult knowledge workers, whom we might expect to have a greater capacity for self-regulation than developing children and adolescents. Even so, “overreliance on the tool” was noted as a significant concern.
The problem of AI mission creep is noted, too, by the Yale poet and professor Meghan O’Rourke in a recent New York Times guest essay:
Students often turn to A.I. only for research, outlining and proofreading. The problem is that the moment you use it, the boundary between tool and collaborator, even author, begins to blur. First, students might ask it to summarize a PDF they didn’t read. Then — tentatively — to help them outline, say, an essay on Nietzsche. The bot does this, and asks: “If you’d like, I can help you fill this in with specific passages, transitions, or even draft the opening paragraphs?”
At that point, students or writers have to actively resist the offer of help. You can imagine how, under deadline, they accede, perhaps “just to see.” And there the model is, always ready with more: another version, another suggestion, and often a thoughtful observation about something missing.
Creepiest of all the research and reflections I have cited in this post is O’Rourke’s own account in that same guest essay. After a month of experimenting with AI bots, she found herself under a spell cast by her new assistant, one that left her soothed yet uneasy:
A month in, I noticed a strange emotional charge from interacting daily with a system that seemed to be designed to affirm me. When I fed it a prompt in my voice and it returned a sharp version of what I was trying to say, I felt a little thrill, as if I’d been seen. Then I got confused, as if I were somehow now derivative.
In talking to me about poetry, ChatGPT adopted a tone I found oddly soothing. When I asked what was making me feel that way, it explained that it was mirroring me: my syntax, my vocabulary, even the “interior weather” of my poems. (“Interior weather” is a phrase I use a lot.) It was producing a fun-house double of me — a performance of human inquiry. I was soothed because I was talking to myself — only it was a version of myself that experienced no anxiety, pressure or self-doubt. The crisis this produces is hard to name, but it was unnerving.
What’s the likelihood that children could be negatively influenced or actually damaged by these bots? What’s the rush? Oh, I almost forgot: you never pause, or even slow to a walk, during a gold rush. Faster, faster, before someone corners the market.
The difference between the current high-tech fix for education and all the failed ones that came before is that this one has the potential to alter what it means to be human. Haven’t children been abused enough by Silicon Valley?
