Researchers are tackling the challenge of managing complex, dynamic workloads in fog and edge computing environments, where low latency and adaptability are crucial. Saeed Akbar (Sarhad University of Science & Information Technology), Muhammad Waqas (GIK Institute of Engineering Sciences & Technology), Rahmat Ullah (University of Essex), and colleagues introduce a framework called Agentic Fog (AF) that moves beyond computationally expensive Large Language Models. The work is significant because it presents a policy-driven, decentralised approach to fog node coordination, formalised as an exact potential game, which guarantees convergence and stability even under node failures and asynchronous updates. Simulations show that AF outperforms existing methods such as greedy heuristics and integer linear programming, delivering lower latency and improved efficiency under fluctuating demands and robust performance across varying system conditions.
Fog architecture’s proximity to end users supports data-intensive applications often underserved by centralized cloud paradigms. Mesh topologies facilitate peer-to-peer (p2p) coordination among Fog Nodes (FNs), enhancing resilience and efficiency. However, most modern fog systems remain largely reactive, relying on heuristics, local optimization, or periodically recalculated centralized models, which struggle with real-world uncertainty and limited visibility. These approaches assume complete knowledge of system state, an assumption that is unsustainable in large-scale, heterogeneous, and failure-prone deployments.
Furthermore, they pay little attention to long-term behaviour under shifting demand, and Integer Linear Programming (ILP) formulations scale poorly under dynamic workloads. System-level intelligence arises from coordinated agent actions, but early agent-based systems were limited by predefined rules and static knowledge. Recent studies use statistical learning and Reinforcement Learning (RL) to improve agent adaptability, but these models typically address single tasks. Large foundation models have emerged to enable complex workflows, yet they lack persistent goals and autonomy. Agentic AI (AAI) is a paradigm in which autonomous agents with partial knowledge interact via shared memory and policy alignment to achieve distributed intelligence, allowing agents to learn and refine behaviour beyond isolated tasks.
Current investigations primarily focus on LLM-driven agents, but their computational overhead and stochastic behaviour make them unsuitable for infrastructure-level systems such as fog computing, which require efficiency, predictability, and analyzability. A non-LLM-driven AAI framework is therefore needed. This work formalizes a mesh-fog architecture as a natural instantiation of AAI, in which FNs exhibit local autonomy, partial observability, p2p interaction, and extended operating time, and it provides a theoretical model of coordination, convergence, and stability. The proposed framework, Agentic Fog (AF), treats FNs as autonomous agents coordinating through shared memory and localized p2p interactions.
To achieve decentralized optimization under bounded rationality, the framework decomposes the system-level objective into policy guidance, with coordination following a Potential Game (PG) formulation that provides formal convergence guarantees and stability under asynchronous updates and partial failures.

Contributions and Novelty of the Proposed Work

This study provides a system-centric definition of AAI independent of LLMs, emphasizing persistent autonomy, policy-driven decision-making, and analyzable coordination dynamics, addressing conceptual ambiguity and repositioning AAI for resource-constrained infrastructures. The work models fog computing as a collection of autonomous, policy-driven agents, providing a unified formal model that captures shared memory, decentralized coordination, and time-scale separation between policy alignment and local decision-making. It demonstrates that decentralized fog coordination induces an exact PG, yielding formal guarantees of convergence under asynchronous bounded-rational best-response dynamics.
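For readers unfamiliar with the game-theoretic machinery, the convergence claim rests on the standard exact potential game property: every unilateral change in an agent's utility is mirrored exactly by a change in one global potential function. A minimal statement in generic notation (ours, not the paper's) is:

```latex
% Standard exact potential game condition (generic notation, not the paper's):
% there exists a single potential function \Phi such that, for every agent i,
% every profile of the other agents' actions a_{-i}, and every pair a_i, a_i' \in \Lambda_i,
\[
  u_i(a_i', a_{-i}) - u_i(a_i, a_{-i})
  \;=\;
  \Phi(a_i', a_{-i}) - \Phi(a_i, a_{-i}).
\]
% With finite action sets, asynchronous best-response updates monotonically
% improve \Phi, so the dynamics converge to a pure Nash equilibrium.
```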
Collectively, the framework is a principled, analyzable, and infrastructure-compatible agentic architecture for fog computing, complementing both classical distributed control and LLM-centric agent systems.

Comparison with Existing Fog Control Paradigms

Classical ILP-based and RL-based systems, as well as Multi-Agent Systems (MASs), share similarities with the proposed system, such as decentralized decision-making and agent-based abstractions, but differ fundamentally in observability, coordination, formal analyzability, and temporal optimization. ILP-based systems operate on global snapshots and assume strong observability, which limits scalability and makes them fragile under dynamic workloads. RL-based systems are adaptive but lack formal convergence guarantees and may face stability issues under partial observability.
In contrast, the proposed AF is a formally analyzable agentic framework combining persistent autonomy, shared memory, and policy-driven coordination.

AGENTIC AI WITHOUT LARGE LANGUAGE MODELS

AAI refers to a computational paradigm in which autonomous entities (agents) maintain an internal state by partially observing the environment. An agent Ai is modeled mathematically as Ai = ⟨Oi, ζi, πi, Λi, υi⟩, where Oi is the observation function, ζi the agent’s internal state, πi the decision policy mapping observations and state to actions, Λi the action space, and υi an optional reward function guiding adaptation. A defining characteristic is that the policy function πi is algorithm-agnostic: it can be instantiated with optimization-based control, game-theoretic strategies, or learning-based methods, without requiring natural language reasoning or LLMs.
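To make the tuple concrete, here is a minimal, hypothetical sketch in Python (names and structures are ours, not the paper's): the agent is a plain container whose policy slot is an injected callable, so optimisation-based, game-theoretic, or learned policies can be swapped in without touching the agent shell.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Optional, Sequence

Observation = dict   # what the node can sense locally (partial view)
State = dict         # agent-internal state, zeta_i
Action = Any         # an element of the action space Lambda_i

@dataclass
class Agent:
    """A_i = <O_i, zeta_i, pi_i, Lambda_i, upsilon_i> as a plain container (illustrative)."""
    observe: Callable[[], Observation]                          # O_i: observation function
    policy: Callable[[Observation, State], Action]              # pi_i: decision policy
    actions: Sequence[Action]                                   # Lambda_i: action space
    state: State = field(default_factory=dict)                  # zeta_i: internal state
    reward: Optional[Callable[[State, Action], float]] = None   # upsilon_i: optional reward

    def step(self) -> Action:
        """One decision cycle: observe, decide, and optionally record a reward."""
        obs = self.observe()
        action = self.policy(obs, self.state)
        if self.reward is not None:
            self.state["last_reward"] = self.reward(self.state, action)
        return action

# The policy slot is algorithm-agnostic: a greedy rule (as below), a game-theoretic
# best response, or a learned model can be injected interchangeably.
node = Agent(
    observe=lambda: {"queue": {"f1": 3, "f2": 1}},
    policy=lambda obs, st: min(obs["queue"], key=obs["queue"].get),  # pick least-loaded target
    actions=("f1", "f2"),
)
print(node.step())  # -> "f2"
```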
This separation enables formal analysis of system behaviour. Recent literature often describes LLM-based systems as agentic, using language models for planning, task decomposition, or tool invocation, but these systems do not fully address system constraints such as timing, resources, or stability. Agentic behaviour arises from sustained autonomous interaction and policy-driven decision-making, not from language-based reasoning alone. The foundations of AAI were laid decades ago in MAS research, reactive and deliberative agent architectures, swarm intelligence, game theory, and RL, which demonstrated that coherent global behaviour can emerge from local interaction, particularly under partial observability and limited communication.
Distributed control and game-theoretic learning have shown that stability and convergence can be achieved through structured local interactions and bounded-rational decision rules. RL extended these agentic characteristics by enabling policy adaptation, though often with limited guarantees in non-stationary multi-agent settings. Networking and distributed systems have widely used agent-based approaches for adaptive routing, load balancing, cache placement, fault recovery, and resource scheduling.
Autonomous Fog Nodes via Potential Game Coordination
Scientists developed a novel fog computing architecture, termed AF, modelling fog nodes as policy-driven autonomous agents that communicate via peer-to-peer interactions built on shared memory and localised coordination. Rather than relying on language-centric architectures, the study returns to classical agent autonomy, separating agentic intelligence from LLMs to enable formal system analysis. Experiments employed an undirected graph, G = (V, E), to represent the fog infrastructure, where nodes vi ∈ V signify fog nodes with finite capacity and edges eij ∈ E denote bidirectional links between them.
Each node hosts an autonomous agent, Ai = (ζi, Mi, πi), comprising a locally observable system state ζi, finite local memory Mi, and a decision policy πi : (ζi, Mi, M∗) → Λi that maps state and shared context to actions within the action space Λi. Crucially, the policy function is algorithm-agnostic, allowing instantiation with optimisation-based control, game-theoretic strategies, or learning-based methods without relying on natural language reasoning. The system uses a shared agentic memory, M∗ = {T, D, H}, comprising topology information, demand history, and historical policy outcomes, assumed to be eventually consistent and resilient to failures. Sensitivity analysis further confirmed that the system remains effective across different memory and coordination parameters.
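As a rough illustration of how these pieces could fit together, the following sketch assumes simple dictionary-based structures for M∗ = {T, D, H} and a toy offloading rule for πi; none of the concrete layouts, thresholds, or node names come from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class SharedMemory:
    """M* = {T, D, H}: eventually consistent view shared across fog nodes (illustrative)."""
    topology: dict = field(default_factory=dict)   # T: node id -> set of neighbour node ids
    demand: dict = field(default_factory=dict)     # D: node id -> recent demand samples
    history: list = field(default_factory=list)    # H: past policy outcomes, for alignment

def policy(node_id: str, local_state: dict, local_mem: list, shared: SharedMemory) -> str:
    """pi_i: (zeta_i, M_i, M*) -> Lambda_i.
    Toy rule: keep the task if local load is comfortably below capacity,
    otherwise offload to the least-loaded neighbour visible in shared memory."""
    if local_state["load"] < local_state["capacity"] * 0.8:
        return node_id                                   # process locally
    neighbours = shared.topology.get(node_id, set())
    if not neighbours:
        return node_id                                   # no peer available
    return min(neighbours, key=lambda n: sum(shared.demand.get(n, [0])))

# Example wiring for a three-node mesh G = (V, E); all values are made up.
shared = SharedMemory(
    topology={"f1": {"f2", "f3"}, "f2": {"f1"}, "f3": {"f1"}},
    demand={"f1": [0.9], "f2": [0.2], "f3": [0.5]},
)
target = policy("f1", {"load": 0.95, "capacity": 1.0}, local_mem=[], shared=shared)
print(target)  # -> "f2": offload to the least-loaded neighbour
```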
Researchers defined a global objective function, O∗ = αL + βC + γR, where L, C, and R represent end-to-end latency, aggregate cost, and resource utilisation respectively, allowing quantifiable performance evaluation. This methodology enables precise control and formal verification of system behaviour, a critical advantage over LLM-based approaches, which lack predictable execution semantics and formal guarantees in resource-constrained environments.

Sensitivity analysis showed that increasing shared memory improves latency, although the benefits diminish beyond a certain point; the authors note that gains from additional memory level off after roughly 100 episodes, suggesting a practical limit to memory allocation. Future research could explore strategies for dynamically adjusting memory allocation based on workload characteristics, or investigate the application of the framework to more complex fog computing scenarios.

This work clarifies the distinction between Agentic AI and LLM-centric agent frameworks, highlighting the suitability of structured autonomy for infrastructure systems that demand formal analyzability and predictable execution. Ultimately, the research establishes a theoretical and system-level foundation for agentic intelligence in fog computing, offering a principled approach to designing non-LLM agentic architectures for next-generation edge and fog computing infrastructures.
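As a closing illustration of the objective introduced above, a toy scoring function might look like the following; the weights, metric values, and sign conventions are assumptions for demonstration, not the paper's configuration.

```python
def global_objective(latency: float, cost: float, utilisation: float,
                     alpha: float = 0.5, beta: float = 0.3, gamma: float = 0.2) -> float:
    """O* = alpha*L + beta*C + gamma*R as a plain weighted sum.
    Here we assume all three terms are expressed so that a lower O* is better;
    how R enters (rewarded or penalised) is a modelling choice left to the designer."""
    return alpha * latency + beta * cost + gamma * utilisation

# Compare two candidate configurations under the same (made-up) weights and metrics.
baseline = global_objective(latency=120.0, cost=0.40, utilisation=0.85)
agentic  = global_objective(latency=85.0,  cost=0.35, utilisation=0.80)
print(baseline, agentic)  # the configuration with the lower O* is preferred
```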
👉 More information
🗞 Agentic Fog: A Policy-driven Framework for Distributed Intelligence in Fog Computing
🧠 ArXiv: https://arxiv.org/abs/2601.20764
