Binary neural networks (BNNs) present a promising avenue for low-complexity and energy-efficient computation, yet their inherent non-linearity hinders interpretability and formal verification. Mohamed Tarraf, Alex Chan, Alex Yakovlev, and Rishad Shafik, all from Newcastle University, address this challenge by introducing a novel Petri net (PN)-based framework to model BNN operations as event-driven processes. This work is significant because it transforms traditionally opaque BNNs into transparent, analysable systems, enabling detailed examination of concurrency, state evolution, and causal dependencies. The researchers construct modular PN blueprints for key BNN components, composing them into a complete system-level model, and rigorously validate this model against a software-based BNN using Workcraft’s automated tools to establish properties such as 1-safeness and deadlock-freeness. Ultimately, this framework facilitates formal reasoning and verification, paving the way for the deployment of BNNs in safety-critical applications.
Researchers have developed a new framework for understanding and verifying binary neural networks (BNNs), a class of machine learning models prized for their efficiency and low energy consumption. These networks, which constrain weights and activations to binary values, have traditionally been difficult to analyse due to their complex, non-linear behaviour.
This opacity limits their use in safety-critical applications demanding robust guarantees of reliability and predictable operation. A Petri net (PN)-based approach models the internal operations of BNNs as a series of discrete events, exposing causal relationships and dependencies previously hidden within the network’s architecture.
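The binarisation at the heart of these networks can be illustrated with a short sketch. The layer sizes, names, and data below are illustrative, not taken from the paper; only the sign-based binarisation of weights and activations reflects how BNNs operate:

```python
import numpy as np

def binarize(x):
    """Sign binarisation: map real values to {-1, +1} (zero maps to +1)."""
    return np.where(x >= 0, 1, -1)

def bnn_forward(x, weights):
    """Forward pass through a toy BNN: both weights and activations binary."""
    a = binarize(x)
    for w in weights:
        a = binarize(a @ binarize(w))  # binary matmul, then re-binarise
    return a

rng = np.random.default_rng(0)
w1 = rng.standard_normal((4, 3))          # hypothetical 4-in, 3-out layer
x = np.array([0.5, -1.2, 3.0, -0.1])
out = bnn_forward(x, [w1])
print(out)  # every entry is -1 or +1
```

The sign function's non-differentiability and the discrete state space are exactly what make such networks hard to analyse with conventional, gradient-oriented tools, motivating the event-driven treatment described below.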
By “eventizing” the BNN’s computations, the researchers created modular blueprints for core components such as activation, gradient calculation, and weight updates, then combined these into a complete system-level model. This model was then rigorously tested against a standard software implementation of a BNN to ensure accuracy. Formal verification techniques, utilising structural and reachability checks, were employed to establish properties like 1-safeness, deadlock-freeness, mutual exclusion, and correct-by-construction causal sequencing.
The framework’s scalability and complexity were assessed at three levels of granularity (segment, component, and system) using automated tools within the Workcraft suite. This PN-based approach offers a crucial step towards causal introspection of BNNs, enabling formal reasoning and verification that were previously unattainable. The ability to trace dependencies and validate behaviour opens up possibilities for deploying BNNs in domains where dependability is paramount, such as fault-tolerant satellite control and onboard system health monitoring.
Transparent and event-driven BNNs can be formally reasoned about, paving the way for more trustworthy and reliable machine learning systems. The framework and associated tooling provide a principled foundation for integrating high-performance machine learning into safety-critical settings.
Petri net modelling confirms BNN correctness and quantifies structural complexity
The constructed Petri net models demonstrate 1-safeness, deadlock-freeness, mutual exclusion, and correct-by-construction causal sequencing across core BNN components. Reachability and structural checks confirmed these properties, establishing a foundation for reliable operation. Segment-level analysis revealed that the average model size for a single BNN segment (comprising activation, gradient computation, and weight update) is 17 places and 23 transitions.
Component-level assessment, examining the composition of multiple segments, showed a linear increase in model size, reaching an average of 68 places and 89 transitions for a four-segment component. System-level evaluation of the complete BNN model, encompassing all segments and components, resulted in a total of 272 places and 341 transitions. This comprehensive model allows for detailed analysis of concurrency, ordering, and state evolution within the BNN.
Structural scalability was quantitatively assessed, revealing a consistent relationship between model size and complexity at each level of granularity. The complexity, measured by the ratio of transitions to places, remained relatively stable across segments (1.35), components (1.31), and the full system (1.25), indicating predictable growth in computational demands. Proxy places, a feature of the Workcraft toolset, were instrumental in managing visual complexity, reducing arc clutter and enhancing readability without altering the underlying semantics of the model.
Modelling Binary Neural Network Operation via Event-Driven Petri Nets
Petri nets (PNs) underpin this work, serving as a formal modelling language to represent the operational dynamics of binary neural networks (BNNs). These nets, a graphical and mathematical modelling tool, capture causality, concurrency, and state evolution within the BNN through places, transitions, and tokens. The research constructs modular PN blueprints for the fundamental BNN components (activation, gradient computation, and weight updates) and then composes these into a comprehensive, system-level model.
This modular approach facilitates analysis and verification by breaking down the complex network into manageable, interconnected segments. The methodology prioritises event-driven modelling, explicitly representing each operation within the BNN as a discrete event or transition within the PN. This “eventization” exposes the causal relationships and dependencies inherent in the BNN’s computations, allowing for fine-grained analysis of how data flows and influences subsequent operations.
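The token-game semantics underlying this eventisation can be sketched in a few lines. The net below is a hypothetical fragment with illustrative place and transition names, not one of the paper's actual blueprints:

```python
class PetriNet:
    """Minimal 1-safe Petri net: each place holds at most one token."""

    def __init__(self, marking):
        self.marking = set(marking)   # places currently holding a token
        self.transitions = {}         # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (frozenset(inputs), frozenset(outputs))

    def enabled(self, name):
        inputs, outputs = self.transitions[name]
        # Enabled iff every input place is marked and firing preserves
        # 1-safeness (no output place, other than a recycled input, is marked).
        return inputs <= self.marking and not (outputs - inputs) & self.marking

    def fire(self, name):
        assert self.enabled(name), f"{name} is not enabled"
        inputs, outputs = self.transitions[name]
        self.marking = (self.marking - inputs) | outputs

# A toy "eventized" activation step: each operation is a discrete transition.
net = PetriNet(marking={"input_ready"})
net.add_transition("compute_sign", {"input_ready"}, {"activation_done"})
net.add_transition("emit_output", {"activation_done"}, {"output_ready"})

net.fire("compute_sign")
net.fire("emit_output")
print(net.marking)  # {'output_ready'}
```

Firing order here makes the causal dependency explicit: `emit_output` cannot fire until `compute_sign` has deposited its token, which is the kind of fine-grained ordering the PN model exposes.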
By modelling the BNN’s behaviour in this manner, the study moves beyond post-hoc approximations offered by techniques like SHAP and LIME, instead providing a mechanistic view of the network’s internal processes. Validation of the composed PN model proceeds through comparison with a reference software-based BNN implementation, ensuring the PN accurately reflects the behaviour of the original network.
Formal verification is then undertaken using the Workcraft toolset, leveraging its Petrify and Mpsat backends. Specifically, the research verifies 1-safeness, deadlock-freeness, mutual exclusion, and correct-by-construction causal sequencing. Workcraft’s automated measurement tools are employed to assess the scalability and complexity of the PN model at the segment, component, and system levels, providing quantitative insights into the model’s efficiency and manageability.
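The kinds of properties checked in Workcraft can be illustrated with a brute-force reachability sweep over a tiny net. This is a toy re-implementation of the idea for intuition only, not Petrify or Mpsat, and the net itself is hypothetical:

```python
def explore(initial, transitions):
    """Enumerate all reachable markings of a net by exhaustive search.

    transitions: dict name -> (frozenset of input places,
                               frozenset of output places).
    Markings are frozensets of marked places; putting a second token on a
    place would violate 1-safeness, so that case raises an error.
    """
    seen, frontier = {initial}, [initial]
    deadlocks = set()
    while frontier:
        m = frontier.pop()
        successors = []
        for ins, outs in transitions.values():
            if ins <= m:
                if (outs - ins) & m:
                    raise ValueError("1-safeness violated")
                successors.append(frozenset((m - ins) | outs))
        if not successors:          # no enabled transition: a deadlock
            deadlocks.add(m)
        for s in successors:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return seen, deadlocks

# Toy cyclic net: compute -> update -> back to the start, so it never stalls.
t = {
    "compute": (frozenset({"idle"}), frozenset({"busy"})),
    "update":  (frozenset({"busy"}), frozenset({"idle"})),
}
reachable, deadlocks = explore(frozenset({"idle"}), t)
print(len(reachable), len(deadlocks))  # 2 reachable markings, 0 deadlocks
```

Real model checkers avoid enumerating the full state space like this (Mpsat, for instance, uses unfolding-based techniques), but the properties being decided, 1-safeness and deadlock-freeness over the reachable markings, are the same.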
The Bigger Picture
The relentless drive towards artificial intelligence is increasingly focused on efficiency, and this work represents a significant step forward in making streamlined neural networks genuinely trustworthy. Binary neural networks promise substantial reductions in computational cost and energy consumption, but their very simplicity has been a barrier to verification.
Unlike their full-precision counterparts, these networks resist standard analysis: their ‘black box’ nature makes it difficult to understand why they make the decisions they do, a critical flaw for applications where safety and reliability are paramount. This research tackles that opacity head-on, not by altering the networks themselves, but by creating a formal ‘blueprint’ using Petri nets.
By modelling the internal operations as a series of discrete events, the researchers have effectively built a transparent window into the network’s logic. This allows rigorous testing that confirms properties such as deadlock-freeness and correct sequencing, guarantees that are simply out of reach for traditional methods. The ability to formally verify these properties is a game-changer, potentially unlocking the use of binary networks in areas like autonomous vehicles or medical diagnostics.
However, the scalability of this Petri net approach remains a key question. While the framework demonstrates verification at a component level, the complexity increases rapidly as the network grows. Furthermore, this work focuses on the structure of the network, not its learning process. Understanding how training affects the verifiability of these binary networks is a crucial next step. Future efforts might explore automated translation of existing binary networks into Petri net models, or even the development of training algorithms that inherently produce verifiable architectures.
👉 More information
🗞 Eventizing Traditionally Opaque Binary Neural Networks as 1-safe Petri net Models
🧠 ArXiv: https://arxiv.org/abs/2602.13128
