On April 23, 2025, Chaeyeon Lim published a thought-provoking article titled "DeBiasMe: De-biasing Human-AI Interactions with Metacognitive AIED Interventions," exploring how metacognitive support and adaptive scaffolding can help university students recognize and mitigate cognitive biases when interacting with AI systems.
Generative AI increasingly transforms academic environments, yet understanding human biases in AI interactions remains critical. This paper advocates for metacognitive AI literacy interventions to help university students critically engage with AI and address biases across Human-AI interaction workflows. It presents frameworks focusing on deliberate friction for bias mitigation, bi-directional input-output interaction, and adaptive scaffolding. These are illustrated through the DeBiasMe project, enhancing awareness of cognitive biases while empowering user agency in AI interactions.
The paper invites stakeholders to discuss design and evaluation methods for scaffolding mechanisms, bias visualization, and analysis frameworks.

Artificial intelligence (AI) is reshaping the landscape of work, education, and human interaction at an unprecedented pace. While its potential to augment human capabilities is clear, recent research underscores the importance of adopting a deliberate approach to ensure that AI truly complements, rather than supplants, human ingenuity. This article examines the emerging challenges and opportunities in human-AI collaboration, with a focus on metacognition, responsible design, and balanced use of AI tools.
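To make the "deliberate friction" idea from the abstract concrete, here is a minimal, hypothetical sketch of how a chat wrapper might withhold an AI answer until the user commits to their own prediction, prompting reflection before exposure to the model's output. The function name and flow are illustrative assumptions, not the DeBiasMe project's actual implementation:

```python
# Hypothetical sketch of "deliberate friction" in a Human-AI workflow:
# before revealing the AI's answer, ask the user to state their own
# expectation, then show both side by side for comparison.

def frictioned_reply(question, ai_answer, ask=input, show=print):
    """Withhold the AI answer until the user records a prediction."""
    show(f"Question: {question}")
    prediction = ask("Before seeing the AI's answer, what do you expect? > ")
    show(f"Your prediction: {prediction}")
    show(f"AI answer:       {ai_answer}")
    # Returning both makes it easy to log prediction/answer pairs,
    # which could later support bias-awareness visualizations.
    return {"prediction": prediction, "ai_answer": ai_answer}
```

The `ask` and `show` parameters are injectable so the same flow works in a terminal, a web UI, or a test harness; the point is the ordering, not the I/O.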
One of the most pressing issues in the era of AI is the development of metacognitive skills, that is, thinking about how we think. A 2024 study by Sidra and Claire Mason highlights that as AI becomes more deeply integrated into the workforce, individuals must cultivate the ability to reflect on their own thinking processes when using AI tools. This involves understanding when and how to trust AI outputs, identifying biases in algorithms, and recognizing situations where human judgment remains indispensable.
Research by Lev Tankelevitch and colleagues has explored the metacognitive demands of generative AI systems like ChatGPT. Their findings reveal that users often struggle with critically evaluating AI-generated content, leading to an over-reliance on these tools. This can result in a decline in independent problem-solving skills and a diminished ability to assess the validity of information.
To address these challenges, researchers advocate for human-centered approaches to AI development. Ben Shneiderman has proposed three key principles: designing AI systems that augment rather than replace human capabilities, ensuring transparency in algorithmic decision-making, and enabling users to evaluate and refine AI outputs. These principles align with the broader goal of creating tools that empower humans to think more critically and creatively.
While AI offers immense potential, there is growing concern about its overuse. A 2024 study by Chunpeng Zhai and colleagues found that excessive reliance on AI dialogue systems can negatively impact students’ cognitive abilities, particularly in areas requiring independent thought and problem-solving. This underscores the importance of maintaining a balance between AI use and human agency.
Frameworks for responsible innovation, such as those proposed by researchers like Ben Shneiderman, emphasize the need to design AI tools with ethical considerations at their core. These frameworks aim to foster a culture where AI is seen not as a replacement for human intelligence but as a tool that enhances it.
As AI continues to evolve, the responsibility lies with developers, educators, and policymakers to create systems that promote metacognition, transparency, and balance. By fostering a deeper understanding of how humans interact with AI and designing tools that augment rather than replace human capabilities, we can ensure that AI truly becomes a force for good in the 21st century.
The path forward requires a collective effort to prioritize education, ethical design, and responsible use. Only then can we unlock the full potential of human-AI collaboration while safeguarding the critical thinking skills that make us uniquely human.
👉 More information
🗞 DeBiasMe: De-biasing Human-AI Interactions with Metacognitive AIED (AI in Education) Interventions
🧠 DOI: https://doi.org/10.48550/arXiv.2504.16770
