Like Humans, ChatGPT Relies on Memory and Examples for Language Generation

A study by researchers at the University of Oxford and the Allen Institute for AI (Ai2), published in Proceedings of the National Academy of Sciences, has found that large language models (LLMs), including those similar to ChatGPT, generate language through analogical reasoning rather than by applying grammatical rules. The researchers compared how GPT-J, an open-source LLM, and human participants handle derivational morphology. By presenting the model with made-up adjectives and observing which suffix it chose, such as -ness or -ity (a minimal version of this probe is sketched below), they showed that LLMs exhibit analogical generalization much as humans do, while differing in their underlying processing mechanisms. The finding advances our understanding of how AI models learn and generate language, highlighting parallels with human cognition alongside distinctly computational strategies, and it reflects the ongoing collaboration between linguistics and AI in building more sophisticated systems.
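The probe itself is easy to picture. The sketch below is an illustration, not the paper's exact protocol: it scores the two candidate derived forms of a nonce adjective with a causal language model and reports which one the model finds more probable. The prompt sentence and the use of the Hugging Face transformers API are assumptions for the demo; GPT-J is the model tested in the study, but any causal LM from the Hub works for a lighter-weight run.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-J is the model examined in the study; swap in a smaller
# causal LM if the 6B checkpoint is too heavy for your machine.
MODEL_NAME = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sequence_logprob(text: str) -> float:
    """Total log-probability the model assigns to a piece of text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Each position's logits predict the *next* token, so shift by one.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    picked = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return picked.sum().item()

# Nonce adjective from the article's example: which derived noun
# does the model prefer? (Carrier sentence is an invented example.)
for form in ("friquishness", "friquishity"):
    score = sequence_logprob(f"The {form} of the proposal surprised everyone.")
    print(f"{form}: {score:.2f}")
```

Whichever form scores higher is, in effect, the model's "choice" of suffix; running many such nonce adjectives past the model is what lets researchers compare its preferences with those of human participants.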

Human vs. AI Analogical Generalization

Humans excel at creating novel words by leveraging semantic relationships and cultural contexts. Our ability to draw on a rich store of experiences lets us coin terms that resonate within specific linguistic and social frameworks. Creating a new word typically involves understanding not just its components but also its intended use and impact.

In contrast, AI systems, particularly large language models (LLMs), rely on statistical patterns derived from vast datasets. These models can mimic human-like strategies, such as appending common suffixes like “-ness” or “-ity”, but their approach is fundamentally different. Faced with the choice between “friquishness” and “friquishity”, for example, a model selects whichever form is statistically more likely rather than reasoning about meaning; a toy version of the analogical alternative is sketched below.
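For contrast with pure likelihood, here is a toy illustration of analogical generalization. It is not the study's actual model: it picks a suffix by comparing the nonce adjective's ending with the endings of attested adjectives, and the miniature lexicon is invented for the demo.

```python
# Miniature, invented lexicon: adjectives attested with each suffix.
LEXICON = {
    "ness": ["selfish", "sluggish", "foolish", "kind"],
    "ity": ["electric", "scarce", "available", "pure"],
}

def shared_ending(a: str, b: str) -> int:
    """Length of the longest common ending of two strings."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def analogical_choice(nonce: str) -> str:
    # Score each suffix by its best analogical match: how much of the
    # nonce's ending overlaps with an adjective attested with that suffix.
    scores = {
        suffix: max(shared_ending(nonce, word) for word in words)
        for suffix, words in LEXICON.items()
    }
    return max(scores, key=scores.get)

# "friquish" patterns with the -ish adjectives, so -ness wins.
print(analogical_choice("friquish"))  # -> "ness"
```

Even this crude version captures the key idea: the choice is driven by similarity to stored examples, not by an explicit suffixation rule.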

Implications for AI Research

The limitations of current AI models highlight a critical area for research: moving creativity beyond mimicry. Because they depend on frequency-based learning, current systems often produce outputs that lack the nuance and context inherent in human word creation, resulting in awkward or nonsensical terms that fail to connect with users; the toy baseline below shows just how blunt a purely frequency-driven rule can be.
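The following deliberately crude baseline is a hypothetical toy with made-up counts, not any real system: it always emits whichever suffix is more common in its sample, ignoring the stem entirely.

```python
from collections import Counter

def frequency_choice(corpus_tokens: list[str]) -> str:
    """Pick whichever suffix is simply more common, ignoring the stem."""
    counts = Counter()
    for token in corpus_tokens:
        if token.endswith("ness"):
            counts["ness"] += 1
        elif token.endswith("ity"):
            counts["ity"] += 1
    return counts.most_common(1)[0][0]

# Made-up sample: -ness happens to dominate, so every nonce adjective
# would be assigned -ness, even stems that clearly call for -ity.
sample = ["happiness", "darkness", "kindness", "scarcity", "purity"]
print(frequency_choice(sample))  # -> "ness"
```

A rule like this would happily coin forms such as “electricness”, which is exactly the kind of awkward output the paragraph above describes.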

Future AI development should focus on integrating semantic understanding and contextual reasoning to address this gap. By moving beyond statistical patterns, models could generate more natural and meaningful expressions, akin to human creativity.

The article underscores the need for innovation in AI research to bridge the divide between mimicry and genuine creativity. Emphasizing the importance of context and meaning, it advocates advanced methods that let AI produce outputs as rich and meaningful as those created by humans. Such a shift could lead to more intuitive and effective language models, enhancing their utility across applications. In essence, while current AI excels at replicating existing patterns, the future lies in a deeper grasp of semantics and context, paving the way for genuinely creative, human-like language generation.

The Neuron

With a keen intuition for emerging technologies, The Neuron brings over five years of deep expertise to the AI conversation. Coming from roots in software engineering, they have witnessed firsthand the shift from traditional computing paradigms to today's ML-powered landscape. Hands-on experience implementing neural networks and deep learning systems for Fortune 500 companies has given them insights few tech writers possess. From developing recommendation engines that drive billions in revenue to optimizing computer vision systems for manufacturing giants, The Neuron doesn't just write about machine learning; they have shaped its real-world applications across industries. Having built systems used by millions of users worldwide, that deep technological base informs their writing on current and future technologies, whether AI or quantum computing.

Latest Posts by The Neuron:

UPenn Launches Observer Dataset for Real-Time Healthcare AI Training
December 16, 2025

Researchers Target AI Efficiency Gains with Stochastic Hardware
December 16, 2025

Study Links Genetic Variants to Specific Disease Phenotypes
December 15, 2025