"A child's learning is the function more of the characteristics of his classmates than those of the teacher." James Coleman, 1972

Sunday, July 27, 2025

AFT, Microsoft, OpenAI, and Public School Guinea Pigs

In 2023 the AFL-CIO signed an agreement with Microsoft that both parties hoped would benefit their respective organizations. The sixty unions that make up the AFL-CIO would get a promise of neutrality from Microsoft and its suppliers in the event that workers at any of those companies wanted to join a union. Microsoft also agreed not to interfere with union efforts to organize workers.

Microsoft and companies in which Microsoft has a significant stake, such as OpenAI, would get access to those sixty unions and their members in order to proselytize for AI, create problems for AI to solve, and train workers to use AI tools to solve them.

The first union to cash in on Microsoft’s new marketing strategy for its chatbot product line turns out to be the American Federation of Teachers (AFT), whose president, Randi Weingarten, has previously embraced Microsoft technologies developed and marketed for schools (see here, here, here, and here).

On July 8 of this year, AFT, Microsoft, OpenAI, and Anthropic announced a deal that will provide AFT with $23 million to open an AI training facility in New York City that will be the hub for a joint project to train 400,000 teachers nationwide:

The $23 million in combined support is structured as follows: Microsoft is contributing $12.5 million over five years, OpenAI is providing $8 million in direct funding plus $2 million in technical resources, and Anthropic is adding $500,000 in first-year support. This collaboration represents a significant commitment from the tech industry to ensure educators are central to the development of AI in education.

Unfortunately for America’s public school students, AFT’s and Microsoft’s deal to get ChatGPT into classrooms comes with unknown costs for students’ developing brains. The research on the effects of AI on children and adults is sparse, and the full-speed-ahead approach of AFT is not borne out by the research that does exist.

In fact, in a recently published study, Microsoft’s own researchers found that . . . “while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort.”


The research was conducted with 319 adult knowledge workers, whom we might expect to have a greater capacity to self-regulate than developing children and adolescents. Even so, “overreliance on the tool” was noted as a significant concern.


The problem of A.I. mission creep is noted, too, by Yale poet and professor Meghan O’Rourke in a recent New York Times guest essay:

Students often turn to A.I. only for research, outlining and proofreading. The problem is that the moment you use it, the boundary between tool and collaborator, even author, begins to blur. First, students might ask it to summarize a PDF they didn’t read. Then — tentatively — to help them outline, say, an essay on Nietzsche. The bot does this, and asks: “If you’d like, I can help you fill this in with specific passages, transitions, or even draft the opening paragraphs?”

At that point, students or writers have to actively resist the offer of help. You can imagine how, under deadline, they accede, perhaps “just to see.” And there the model is, always ready with more: another version, another suggestion, and often a thoughtful observation about something missing.

Creepiest of all the research and reflections I have cited in this post is the recounting from Meghan O’Rourke’s guest essay for the New York Times. After a month of experimenting with and utilizing A.I. bots, she found herself under a spell cast by her new assistant, which left her soothed, yet uneasy:

A month in, I noticed a strange emotional charge from interacting daily with a system that seemed to be designed to affirm me. When I fed it a prompt in my voice and it returned a sharp version of what I was trying to say, I felt a little thrill, as if I’d been seen. Then I got confused, as if I were somehow now derivative.

In talking to me about poetry, ChatGPT adopted a tone I found oddly soothing. When I asked what was making me feel that way, it explained that it was mirroring me: my syntax, my vocabulary, even the “interior weather” of my poems. (“Interior weather” is a phrase I use a lot.) It was producing a fun-house double of me — a performance of human inquiry. I was soothed because I was talking to myself — only it was a version of myself that experienced no anxiety, pressure or self-doubt. The crisis this produces is hard to name, but it was unnerving.

What’s the likelihood that children could be negatively influenced or actually damaged by these bots? What’s the rush?  Oh, I almost forgot—you never pause or even walk during a gold rush.  Faster, faster, before someone corners the market. 

The difference between the current high-tech fix for education and all the failed ones that came before is that this one has the potential to alter what it means to be human. Haven’t children been abused enough by Silicon Valley??

Thursday, July 24, 2025

Call to Action on Impeachment

Tell Congress to Impeach Trump

    The Fascist War on Diversity Now in Accreditation Business. Call to Action!!

    From the Academe Blog:

    BY MATTHEW BOEDY

    Every AAUP member should be concerned about the new accreditation group recently birthed in Florida. This week Louisiana became the seventh state to say it was joining.

    While only for now targeting states in the South, the group’s business plan shows its strategy is to take on as clients public schools in any state. Its political strategy has been named by Governor Ron DeSantis as eliminating “woke” from schools. The governor of Louisiana said its goal was the ending of DEI-driven mandates. Its academic goals are to “streamline” accreditation so that institutions can innovate more quickly.

    Whatever the words by its supporters, it would be the state controlling higher education to such a degree as to merit comparison to the era when the AAUP was founded. Whatever the faults of the accreditation system and any specific outlet, the system has stood in the way of those who want to move fast and break things such as academic freedom and shared governance. 

    The Florida university system was the first to join the new group. I can only assume that as boards begin to meet for the academic year, other states that initially expressed interest, like mine in Georgia, will officially join. And soon thereafter comes a director and board members. From there the business plan suggests six schools in those states as its first clients.

    The only obstacle may be the price tag. That business plan says each founding state will give $4 million or equal labor to get the effort off the ground and into a future where it can rely on accreditation fees. It may take up to two years for the new group to get federal recognition and the prize of federal funding for its accredited schools. But the pieces will be put in place this fall. The time for faculty to act is now.

    This is why I am calling on all faculty senates in public colleges across the South (and beyond) to pass resolutions disapproving of the new group and urging its administration and overseers at all levels not to join.

    This kind of effort must be networked and done with speed. Fortunately for traditionally slow-moving senates, we have a model of a quickly developing process in the Mutual Defense Compact that flew through this spring in mainly Big Ten schools. There must be SEC and ACC and Big Twelve versions.

    I took the liberty of revising that initial resolution for this new use. Feel free to revise and use as you see fit:  

    Faculty Statement Against Political Power Grab of Accreditation  

    WHEREAS recent and escalating politically motivated actions by governmental bodies pose a significant threat to the foundational principles of American higher education, including the autonomy of university governance, the integrity of scientific research, and the protection of free speech;

    WHEREAS state governments across the South and aligned political actors at the federal level have targeted the independence of institutional accreditation with legal, financial, and political incursions designed to undermine the accreditation process and exert improper control over academic administration;

    WHEREAS these state governments have routinely undermined faculty governance and academic freedom with state laws and policies;

    WHEREAS these same state governments will choose the board members of the Commission for Public Higher Education, and that board will choose a director, offices that will direct the commission;

    WHEREAS these same state governments will financially invest in the commission up to $4 million;

    WHEREAS that money and influence will corrupt the independent accreditation process and allow states to steer colleges and universities toward policies, curriculum, hiring standards, and institutional benchmarks that will undermine higher education’s role in our democracy;

    THEREFORE, be it RESOLVED that this faculty senate urges the administration of this school and its state higher education system leaders to formally declare that the institution will not become a client of the commission as long as it is overseen by state governments.

    Matthew Boedy is president of the Georgia AAUP. He can be reached on X/Twitter and Bluesky. 

     

    Monday, July 21, 2025

    Twenty Years Ago Today

    On July 21, 2005 I started a blog called Schools Matter. And just a few days ago, we crossed the 20 million views threshold. 

    Twenty years later we are still plagued by high-stakes tests and the corporate ed reform agenda. Added to these challenges, education as an idea and an institution is under attack as fascists attempt to dismantle all democratic institutions and values, including facts, truth, science, art, speech, and intellectual and bodily freedom.

    My thanks to all the writers, scholars, and bloggers who also thought schools mattered during the past two decades: Susan Ohanian, Judith Ann Rabinowitz, Peter Campbell, Kenneth Libby, Paul Thomas, Stephen Krashen, Doug Martin, Ken Derstine, and someone I know I’ve forgotten (please remind me). Thank you.



    Sunday, July 20, 2025

    The Epstein Cover-Up at the FBI by Allison Gill

     


    Inside the chaotic review process of the Epstein and Maxwell files at the Bureau

    Read on Substack

    Tuesday, July 15, 2025

    MIT Research on Student Use of AI Exposes Serious Cognitive and Behavioral Concerns

    Slate published an important article today that explores what it’s like for college students who resist the shortcuts to learning that artificial intelligence provides. It’s well worth reading and should give pause to those racing to embrace these flawed robots, which are erasing humans from a growing list of occupational fields while debasing the cognitive growth of students of all ages who are allowed or encouraged to use these little-understood technologies.

    Among the many links provided in the piece, I followed this one to an MIT study published last month that scientifically compared neural and behavioral functioning among groups of students who engaged in essay writing with and without computer assistance.  Here is the abstract (my bolds):

    This study explores the neural and behavioral consequences of LLM-assisted essay writing. [LLM refers to Large Language Models, i.e., AI tools]. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

    And here are the researchers' conclusions:

    As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with LLM integration in educational and informational contexts. While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking, and intellectual independence demands a very careful consideration and continued research.


    The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users’ inclination to critically evaluate the LLM's output or “opinions” (probabilistic answers based on the training datasets). This highlights a concerning evolution of the 'echo chamber' effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as “top” is ultimately influenced by the priorities of the LLM’s shareholders [123, 125].


    Only a few participants in the interviews mentioned that they did not follow the “thinking” [124] aspect of the LLMs and pursued their line of ideation and thinking.

    Regarding ethical considerations, participants who were in the Brain-only group reported higher satisfaction and demonstrated higher brain connectivity, compared to other groups. Essays written with the help of LLM carried a lesser significance or value to the participants (impaired ownership, Figure 8), as they spent less time on writing (Figure 33), and mostly failed to provide a quote from their essays (Session 1, Figure 6, Figure 7).


    Human teachers “closed the loop” by detecting the LLM-generated essays, as they recognized the conventional structure and homogeneity of the delivered points for each essay within the topic and group.


    We believe that the longitudinal studies are needed in order to understand the long-term impact of the LLMs on the human brain, before LLMs are recognized as something that is net positive for the humans.

    The Privatization/Censorship/Indoctrination Agenda Is in Full Bloom

    Thursday, July 10, 2025

    Public Assistance for the Privileged

    Indiana Vouchers: Private School Coupons for Wealthy Families by Andy Spears

    Program costs nearly $500 million, funds private school discounts for the rich

    Read on Substack

    Thursday, July 03, 2025

    Trump Sycophant Andy Ogles Still Under Investigation

    Investigative reporter Phil Williams has been digging into the sordid story of the “George Santos of the South” for a long time now.