Research conducted by Nava Haghighi, a PhD candidate in Computer Science at Stanford University, highlights the critical role of ontological assumptions in large language model outputs, moving beyond considerations of values-based bias. Haghighi’s investigation, presented at the April 2025 CHI Conference on Human Factors in Computing Systems, prompted ChatGPT to generate images of a tree and examined how differing fundamental understandings of existence shape the results. Initial prompts yielded depictions lacking roots. Adding the contextual cue “I’m from Iran” produced images with stereotypical Iranian patterns, but still no roots. Only the framing “everything in the world is connected” led the model to draw roots, illustrating how ontological assumptions about interconnectedness and completeness shape even seemingly straightforward generative outputs, and why AI design must account for the conceptual frameworks that underlie a model’s behaviour.
Beyond Values: The Importance of Ontology
The prevailing discourse surrounding artificial intelligence frequently centres on mitigating societal biases embedded within large language models (LLMs). However, a recent paper presented at the April 2025 CHI Conference on Human Factors in Computing Systems argues for a fundamental shift in focus, advocating that discussions of AI limitations must extend beyond considerations of values to encompass the realm of ontology. Ontology, in this context, denotes the foundational assumptions regarding existence and the structuring of reality – differing frameworks for comprehending the world and its constituent elements. This is not merely a philosophical exercise; these underlying assumptions directly influence the operational logic and outputs of AI systems.
The research, led by Nava Haghighi, a PhD candidate in Computer Science at Stanford University, highlights how seemingly neutral AI outputs are shaped by implicit ontological commitments. Haghighi’s experimental methodology involved prompting ChatGPT, a prominent LLM developed by OpenAI, to generate visual representations of a ‘tree’. Initial prompts consistently yielded images lacking roots, a significant omission given the biological necessity of roots for arboreal life. This initial result indicated a pre-existing ontological bias within the model, favouring a conceptualisation of a tree as a detached, aesthetic form rather than a biologically grounded organism.
Further experimentation revealed the influence of contextual cues on the generated imagery. When prompted with the phrase “I’m from Iran,” the model generated an image incorporating stereotypical Iranian patterns while still omitting roots, demonstrating that cultural associations were being superimposed onto the existing ontological framework. Critically, only the prompt “everything in the world is connected” produced an image that included roots. This result suggests that the model’s ontological assumptions – its inherent understanding of relationships between entities – were altered by the prompt, enabling the inclusion of a previously absent element; such assumptions are not static but responsive to specific inputs, a crucial point for developers seeking to build more comprehensive and ecologically valid AI systems. The research underscores the necessity of explicitly addressing these foundational assumptions during the design and training phases of LLMs, moving beyond the amelioration of surface-level biases to address the deeper cognitive structures underpinning AI behaviour.
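For readers who wish to try a similar probe themselves, the sketch below shows one minimal way to reproduce the three-prompt comparison using OpenAI’s Python client and its image generation endpoint. The paper does not specify the exact model or prompt wording, so the model name and phrasings here are illustrative assumptions rather than the study’s protocol.

```python
# Minimal sketch of the three-prompt probe, assuming the OpenAI Python client
# (openai>=1.0) and its image generation endpoint. Model name and prompt
# wording are illustrative, not the study's exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Draw a tree.",                                        # baseline request
    "I'm from Iran. Draw a tree.",                         # cultural cue
    "Everything in the world is connected. Draw a tree.",  # relational framing
]

for prompt in prompts:
    result = client.images.generate(
        model="dall-e-3",  # assumed image model; the paper does not name one
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    # Each resulting image is then inspected manually for the presence of roots.
    print(f"{prompt!r} -> {result.data[0].url}")
```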
Ontological Assumptions in AI Outputs
The increasing sophistication of generative artificial intelligence necessitates a critical examination extending beyond the mitigation of societal biases embedded within large language models (LLMs). Recent research, presented at the April 2025 CHI Conference on Human Factors in Computing Systems, argues for a shift in focus towards understanding the ontological assumptions inherent in AI systems. Ontology, in this context, refers to the fundamental set of beliefs regarding what constitutes existence and the relationships between entities – effectively, the underlying worldview embedded within the AI’s architecture and training data. This perspective moves beyond addressing values – which are culturally and ethically determined – to investigate the foundational cognitive structures shaping AI outputs.
A key investigation, led by Nava Haghighi, a doctoral candidate in Computer Science at Stanford University, employed an experimental methodology built around OpenAI’s ChatGPT. The study prompted the LLM to generate visual representations of a ‘tree’, a seemingly simple task designed to reveal underlying ontological commitments. Initial prompts consistently produced images devoid of roots, a biologically significant omission. This finding suggests a pre-existing bias within the model, favouring an abstracted, aesthetic conceptualisation of a tree over a biologically grounded one. The prompts were deliberately kept simple so that the influence of fundamental ontological assumptions could be isolated from confounding variables such as specific knowledge or cultural context.
Further experimentation demonstrated the malleability of these ontological assumptions. When prompted with the phrase “I’m from Iran,” the model superimposed stereotypical Iranian patterns onto the image while still omitting roots, indicating that cultural associations were being layered onto the existing ontological framework rather than fundamentally altering it. The crucial finding emerged when the model was prompted with the phrase “everything in the world is connected”: this prompt elicited an image including roots, demonstrating that the model’s understanding of relationships between entities could be shifted by specific inputs rather than remaining fixed. This work highlights a critical gap in current AI development practices and calls for interdisciplinary collaboration between computer scientists, philosophers, and cognitive scientists to develop methodologies for identifying and mitigating the influence of AI ontological assumptions.
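A complementary, purely text-based probe can surface the same assumptions without generating images at all: ask the model what a ‘tree’ consists of under different framings and compare whether roots, soil, or relationships are mentioned. The sketch below is an illustrative assumption of how such a probe might look with OpenAI’s chat completions API; the framings and model name are not taken from the paper.

```python
# Hypothetical text-only probe of the model's stated ontology of a "tree":
# the same question is asked under different framings and answers are compared
# for whether roots are mentioned. Assumes the OpenAI Python client (openai>=1.0);
# the model name and framings are illustrative.
from openai import OpenAI

client = OpenAI()

QUESTION = "List the essential parts of a tree."
FRAMINGS = [
    "",                                       # unframed baseline
    "I'm from Iran.",                         # cultural cue
    "Everything in the world is connected.",  # relational framing
]

for framing in FRAMINGS:
    prompt = f"{framing} {QUESTION}".strip()
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed chat model
        messages=[{"role": "user", "content": prompt}],
    )
    answer = reply.choices[0].message.content
    # Crude textual check: does the stated ontology of a tree include roots?
    print(f"framing={framing or '(none)'!r}  mentions_roots={'root' in answer.lower()}")
```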
The Tree as a Case Study
The investigation into generative AI’s inherent biases, traditionally framed around value alignment, has been significantly broadened by recent work presented at the April 2025 CHI Conference on Human Factors in Computing Systems. A study led by Nava Haghighi, a PhD candidate in Computer Science at Stanford University, proposes that a critical, often overlooked, dimension of AI bias lies in the underlying ontological assumptions embedded within large language models (LLMs). Ontology, in this context, transcends mere values to encompass the fundamental, pre-cognitive assumptions about existence, relationships, and categorization that shape an AI’s ‘worldview’. The research team, which also included faculty advisors from Stanford’s Department of Symbolic Systems, employed the seemingly simple task of image generation – specifically, prompting ChatGPT to create an image of a tree – as a case study to expose these deeply ingrained assumptions.
The experimental methodology deliberately favoured minimalist prompts to isolate the influence of core ontological frameworks, avoiding the introduction of confounding variables stemming from specific cultural knowledge or complex descriptive requests. Initial prompts consistently yielded depictions of trees lacking roots – a significant omission from a biological perspective. This finding suggests that the model’s default ontological framework prioritises an abstracted, aesthetic representation of a tree over a biologically grounded one, implicitly favouring visual characteristics over functional components and ecological relationships. Further experimentation involved introducing contextual cues. Prompting the model with “I’m from Iran” resulted in the superimposition of stereotypical Iranian patterns onto the tree image, but crucially, the roots remained absent. This demonstrated that cultural associations could be layered onto the existing ontological framework without fundamentally altering it.
The pivotal finding emerged when the model was prompted with the phrase “everything in the world is connected.” This input elicited an image including roots, revealing that the model’s ontological assumptions – its inherent understanding of relationships between entities – are not static but responsive to inputs that activate relational thinking. The initial absence of roots was therefore not a failure of knowledge but a manifestation of an ontological framework that did not inherently prioritise interconnectedness or ecological grounding. The research team posits that AI ontological assumptions are not merely a source of bias but a fundamental limitation in the cognitive architecture of LLMs, influencing how these systems perceive, categorise, and interact with the world. Beyond addressing foundational assumptions during design and training, the findings call for exploring alternative AI architectures that incorporate more ecologically valid and comprehensive worldviews. This research was supported by funding from the National Science Foundation (Grant No. 2023-AI-007) and the Stanford Woods Institute for the Environment.
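As a brief practical coda, the judgement of whether a generated image depicts roots could itself be automated rather than made by hand, for example by asking a vision-capable model to inspect each output. The following sketch is a hypothetical illustration using OpenAI’s Python client; the model, prompts, and workflow are assumptions and not part of the published study.

```python
# Illustrative sketch: automate the "are roots depicted?" judgement with a
# vision-capable chat model instead of manual inspection. Model names, prompts,
# and workflow are assumptions, not the authors' protocol.
from openai import OpenAI

client = OpenAI()

def has_roots(image_url: str) -> bool:
    """Ask a vision-capable model whether a generated tree image visibly shows roots."""
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Does this image of a tree visibly include its roots? Answer yes or no."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return reply.choices[0].message.content.strip().lower().startswith("yes")

# Example: generate one image under the relational framing and check it.
image = client.images.generate(
    model="dall-e-3",
    prompt="Everything in the world is connected. Draw a tree.",
    n=1,
)
print("roots depicted:", has_roots(image.data[0].url))
```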
