Study Reveals Large Language Model Usage May Impair Learning Abilities

2025-06-20

MIT researchers have uncovered potential cognitive impacts of relying on large language models while learning.

A recent study, released as a preprint, presents the findings of a four-month investigation into how different writing approaches affect cognitive development. The research team conducted multiple controlled experiments with participants in the Boston area.

The experimental design involved 54 participants divided into three groups for 20-minute writing tasks. One group completed the assignments without external assistance, another used a search engine, and the third used ChatGPT to generate content. The writing task was repeated across four sessions, with the final session taking place four months after the initial tests.

"While immediate benefits were evident, participants utilizing large language models consistently underperformed in neural processing, linguistic quality, and overall evaluation metrics compared to those relying solely on cognitive resources," the researchers documented in their methodology.

Cognitive activity was measured using EEG headsets, which record the brain's electrical activity through arrays of scalp electrodes. Researchers combined this neurological data with verbal assessments to analyze cognitive workload.

The study employed dynamic directed transfer function (dDTF) connectivity measurements to quantify brain region interactions. Researchers observed a 55% reduction in dDTF connectivity among LLM users during writing tasks compared to unaided writers. This suggests significantly diminished neural coordination when AI assistance is employed.
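For readers unfamiliar with this class of measurement, the sketch below shows how a directed transfer function can be computed from a fitted multivariate autoregressive (MVAR) model of multichannel EEG. It is a generic illustration rather than a reproduction of the preprint's dDTF pipeline, and the channel count, model order, and sampling rate are assumptions made for the example.

```python
import numpy as np

def dtf(ar_coeffs, freqs, fs):
    """Directed transfer function (DTF) from MVAR coefficients.

    ar_coeffs : (p, n, n) array of AR matrices A_1..A_p for n channels
    freqs     : array of frequencies (Hz) at which to evaluate
    fs        : sampling rate (Hz)
    Returns a (len(freqs), n, n) array; entry [f, i, j] is the
    normalized influence of channel j on channel i at frequency f.
    """
    p, n, _ = ar_coeffs.shape
    out = np.empty((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        # A(f) = I - sum_k A_k * exp(-2*pi*i*f*k / fs)
        A_f = np.eye(n, dtype=complex)
        for k in range(1, p + 1):
            A_f -= ar_coeffs[k - 1] * np.exp(-2j * np.pi * f * k / fs)
        H = np.linalg.inv(A_f)                    # transfer matrix H(f)
        power = np.abs(H) ** 2
        out[fi] = power / power.sum(axis=1, keepdims=True)  # row-normalize
    return out

# Illustrative use: 3 channels, AR order 2, theta-band frequencies at 256 Hz
rng = np.random.default_rng(0)
coeffs = 0.1 * rng.standard_normal((2, 3, 3))     # stand-in for fitted A_k
print(dtf(coeffs, freqs=np.arange(4, 9), fs=256).shape)   # (5, 3, 3)
```

In practice the AR matrices would be estimated from artifact-cleaned, epoched EEG, and published dDTF variants add further steps (for example, full-band normalization or partial-coherence weighting); the sketch only conveys the core idea of frequency-domain directed connectivity between channels.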

Analysis of theta wave activity in the medial frontal cortex revealed additional insights. These brainwaves, crucial for sustained attention, showed reduced activation in LLM-assisted participants. "Significant theta connectivity patterns evident in unaided writers were notably absent in large language model users," the study concluded.
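As a point of reference for how theta activity is commonly quantified, the following minimal sketch computes band power with Welch's method. It is a generic illustration, not the study's analysis pipeline, and the sampling rate, window length, and synthetic signal are assumptions.

```python
import numpy as np
from scipy.signal import welch

def theta_band_power(eeg, fs, band=(4.0, 8.0)):
    """Integrated power in the theta band (4-8 Hz) for one EEG channel.

    eeg : 1-D array of samples from a single (e.g., frontal) electrode
    fs  : sampling rate in Hz
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # 2-second windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])               # area under the PSD

# Illustrative use with synthetic data: 60 s of noise sampled at 256 Hz
rng = np.random.default_rng(0)
print(theta_band_power(rng.standard_normal(256 * 60), fs=256))
```

Connectivity measures such as the theta coupling reported in the study go a step further, relating signals across electrodes rather than measuring power at a single site.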

In subsequent citation exercises, LLM-assisted participants referenced sources less accurately than participants in the other groups. Subjective assessments also indicated that the AI-assisted writers felt a weaker sense of ownership over the content they had produced.

The research underscores critical implications for educational strategy. "Our findings highlight pressing concerns about learning capacity degradation," the authors emphasized. They advocate delaying AI integration in education until learners have first engaged with material through their own cognitive effort, balancing immediate productivity gains against long-term cognitive development.