"A child's learning is the function more of the characteristics of his classmates than those of the teacher." James Coleman, 1972

Tuesday, July 15, 2025

MIT Research on Student Use of AI Exposes Serious Cognitive and Behavioral Concerns

Slate published an important article today that explores what it's like for college students who resist the learning shortcuts that artificial intelligence provides. It's well worth reading and should give pause to the rush to embrace these flawed robots, which are erasing humans from a growing list of occupational fields while debasing the cognitive growth of students of all ages who are allowed or encouraged to use these little-understood technologies.

Among the many links provided in the piece, I followed this one to an MIT study published last month that compared neural and behavioral functioning among groups of students who wrote essays with and without computer assistance. Here is the abstract (my bolds):

This study explores the neural and behavioral consequences of LLM-assisted essay writing. [LLM refers to Large Language Models, i.e., AI tools]. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

And here are the researchers' conclusions:

As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with LLM integration in educational and informational contexts. While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking, and intellectual independence demands a very careful consideration and continued research.


The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or "opinions" (probabilistic answers based on the training datasets). This highlights a concerning evolution of the 'echo chamber' effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as "top" is ultimately influenced by the priorities of the LLM's shareholders [123, 125].


Only a few participants in the interviews mentioned that they did not follow the "thinking" [124] aspect of the LLMs and pursued their own line of ideation and thinking.

Regarding ethical considerations, participants who were in the Brain-only group reported higher satisfaction and demonstrated higher brain connectivity, compared to other groups. Essays written with the help of LLM carried a lesser significance or value to the participants (impaired ownership, Figure 8), as they spent less time on writing (Figure 33), and mostly failed to provide a quote from their essays (Session 1, Figure 6, Figure 7).


Human teachers “closed the loop” by detecting the LLM-generated essays, as they recognized the conventional structure and homogeneity of the delivered points for each essay within the topic and group.


We believe that longitudinal studies are needed in order to understand the long-term impact of LLMs on the human brain, before LLMs are recognized as something that is net positive for humans.
