Monday, May 12, 2025

12:54 am by M.
A new paper in Nature's Humanities and Social Sciences Communications has caught our attention:
Enhancing student reading performance through a personalized two-tier problem-based learning approach with generative artificial intelligence
Changqin Huang, Yihua Zhong, Yongzhi Li, Xizhe Wang, Zhongmei Han, Di Zhang & Ming Liu
Humanities and Social Sciences Communications volume 12, Article number: 645 (2025) 

Reading ability plays a vital role in the academic success of students. Problem-based learning (PBL) helps develop deep engagement with the reading materials and higher-order reading skills. However, conventional PBL (C-PBL) activities ignore differences in students’ cognitive levels and fail to provide timely and targeted feedback and guidance to each student. As a result, many students are unable to actively engage in PBL-based reading activities. To address these problems, this study proposes a personalized two-tier PBL (PT-PBL) approach based on generative artificial intelligence (GenAI). It provides a more personalized and refined design for PBL activities to promote personalized reading learning for students. To examine the effectiveness of the proposed approach, 62 college students participated in a quasi-experiment, with the PT-PBL approach in the experimental group and the C-PBL approach in the control group. The results indicate that the PT-PBL approach significantly improves students’ reading performance and motivation. In addition, compared to students with lower engagement, this approach is more effective at improving the reading performance of highly engaged students. Interviews with students showed that those who used the PT-PBL approach focused more on reading tasks and reflected more frequently. The main contribution of this study is proposing a novel PT-PBL approach and providing empirical evidence of its effectiveness, while also creating opportunities for future research to further explore the positive impact of GenAI on reading.

A second article investigates how generative AI, specifically ChatGPT, engages with and simulates human literary interpretation. Through the "ChatGPT as a Research Assistant" (CARA) framework, the authors evaluate whether the model can perform aspects of literary analysis typically carried out by human readers. Using a qualitative, critical methodology, they examine the model's behavior when analyzing canonical texts, focusing on how it navigates character development, thematic depth, and ideological critique. The broader aim is to understand what it means for AI to "read" and how such reading reshapes epistemological and ethical frameworks in literary studies. The authors conclude that while ChatGPT can produce competent, sometimes sophisticated interpretations, its analyses tend to be shaped by dominant discourses, lack reflexivity, and show limited awareness of academic debates or historical power structures.

Jane Eyre is used as the primary case study to evaluate how ChatGPT simulates interpretive reading. The novel serves not just as content but as a diagnostic tool for probing the AI's interpretive frameworks. The authors prompt ChatGPT with questions about Jane's character, feminist themes, Bertha Mason's depiction, and the novel's colonial context. The findings show that:

  • ChatGPT reliably identifies Jane Eyre as a feminist figure, often framing her as an early symbol of female agency and autonomy. However, the model tends to rely on generic, liberal-feminist tropes, avoiding deeper or more critical feminist discourses, such as materialist or intersectional critiques.

  • When discussing Bertha Mason, ChatGPT can acknowledge postcolonial critiques (e.g., Edward Said or Gayatri Spivak) and recognizes Bertha as a symbol of racialized and colonial othering. However, the model does not consistently link Bertha’s portrayal to broader imperial ideologies, nor does it grapple with the ethical implications of representing madness and race in the way literary scholars often do.

  • The model often echoes familiar readings found in mainstream educational or online sources, suggesting that its training data reflects popular rather than scholarly discourse. For example, it draws connections between Jane and self-reliance but lacks the capacity for original or critical synthesis.

  • Importantly, the authors note that ChatGPT performs well when asked to mimic a formal essay structure or provide textbook-style answers. But when asked to critique Jane Eyre from specific theoretical standpoints (e.g., Marxist, psychoanalytic, or decolonial), the responses become vague or formulaic, revealing the model’s limits in understanding methodological nuance.
