On March 31, 2025, researchers Richard J. Tong, Marina Cortês, Jeanine A. DeFalco, Mark Underwood, and Janusz Zalewski published *A First-Principles Based Risk Assessment Framework and the IEEE P3396 Standard*, introducing a novel approach to evaluating risks in generative AI systems.
The framework distinguishes between process risks, which arise during system development or operation, and outcome risks, which manifest in a system's outputs and their real-world effects, and it argues that governance should prioritize the latter. Central to the work is an information-centric ontology that classifies AI outputs into four fundamental categories: perception-level content, knowledge-level content, decision/action plans, and control tokens. This classification enables systematic identification of potential harms and precise attribution of responsibility to stakeholders, including developers, deployers, users, and regulators. The research supports the IEEE P3396 Recommended Practice, which aims to strengthen risk management, safety, trustworthiness, and accountability in generative AI applications.
In the rapidly evolving landscape of artificial intelligence, the IEEE P3396 recommended practice emerges as a pivotal framework for assessing risks in AI systems. This initiative is not merely another set of guidelines but represents a significant shift in how we approach AI governance. By distinguishing between process and outcome risks, IEEE P3396 offers a structured methodology that prioritizes the real-world impacts of AI technologies.
The IEEE P3396 framework introduces a novel approach by categorizing risks into two primary domains: process and outcome. Process risks encompass issues related to the development and operation of AI systems, such as biased training data or inadequate transparency during model deployment. However, the framework’s true innovation lies in its emphasis on outcome risks—those adverse effects resulting directly from an AI system’s outputs.
This focus is crucial because it shifts the narrative from theoretical concerns about AI technologies to tangible impacts. For instance, a transparent AI model might still generate harmful content, underscoring that process alone does not guarantee safety. By prioritizing outcomes, IEEE P3396 ensures that assessments are grounded in real-world consequences rather than abstract fears.
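To make the distinction concrete, the sketch below shows how a risk register might tag entries by domain and surface outcome risks first, in line with the framework's prioritization. This is a minimal illustration only; the `RiskDomain`, `RiskRecord`, and `prioritize` names are hypothetical and are not defined by IEEE P3396 or the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto

class RiskDomain(Enum):
    """The two risk domains the framework distinguishes (illustrative encoding)."""
    PROCESS = auto()  # arises during development or operation (e.g., biased training data)
    OUTCOME = auto()  # arises from the system's outputs and their real-world effects

@dataclass
class RiskRecord:
    """A hypothetical risk-register entry."""
    description: str
    domain: RiskDomain
    severity: int  # higher = more severe (assumed scale, not from the standard)

def prioritize(risks: list[RiskRecord]) -> list[RiskRecord]:
    """Order risks so outcome risks come first, then by descending severity."""
    return sorted(risks, key=lambda r: (r.domain is not RiskDomain.OUTCOME, -r.severity))

register = [
    RiskRecord("Biased training data", RiskDomain.PROCESS, severity=2),
    RiskRecord("Harmful content reaches end users", RiskDomain.OUTCOME, severity=3),
    RiskRecord("Opaque model deployment", RiskDomain.PROCESS, severity=1),
]
for risk in prioritize(register):
    print(f"{risk.domain.name:7s} {risk.description}")
```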
At the heart of IEEE P3396 is an information-centric ontology divided into four categories:
- Perception-level: This involves AI-generated content designed to simulate perceptual experiences, such as deepfake videos or synthetic images. Risks here include misinformation and manipulation of public perception.
- Knowledge-level: Here, AI produces informative content, like answers to factual questions or summaries. Risks revolve around inaccuracies that could mislead users or spread false information.
- Decision/Action Plan: This category includes AI recommendations or decisions, such as medical advice or autonomous strategies. Risks here are high-stakes, potentially leading to harmful actions based on flawed AI outputs.
- Control Tokens: These outputs grant or mediate access directly, such as generated passwords or API keys; the associated risks include unauthorized access and system breaches.
The framework examines each category's distinct risk profile, giving assessors a systematic basis for identifying and mitigating harms.
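As one way to operationalize this ontology, the sketch below encodes the four categories as a Python enumeration and maps each to the characteristic harms described above. The `OutputCategory` and `CHARACTERISTIC_HARMS` names are assumptions for illustration; the standard itself does not prescribe an API.

```python
from enum import Enum

class OutputCategory(Enum):
    """The four information categories of the paper's ontology (illustrative encoding)."""
    PERCEPTION_LEVEL = "perception-level"          # deepfake videos, synthetic images
    KNOWLEDGE_LEVEL = "knowledge-level"            # factual answers, summaries
    DECISION_ACTION_PLAN = "decision/action plan"  # recommendations, autonomous strategies
    CONTROL_TOKENS = "control tokens"              # generated passwords, API keys

# Characteristic harms per category, paraphrasing the risk profiles above.
CHARACTERISTIC_HARMS: dict[OutputCategory, list[str]] = {
    OutputCategory.PERCEPTION_LEVEL: ["misinformation", "manipulation of public perception"],
    OutputCategory.KNOWLEDGE_LEVEL: ["inaccuracies that mislead users", "spread of false information"],
    OutputCategory.DECISION_ACTION_PLAN: ["harmful actions taken on flawed outputs"],
    OutputCategory.CONTROL_TOKENS: ["unauthorized access", "system breaches"],
}

def harms_to_screen(category: OutputCategory) -> list[str]:
    """Return the harms an assessor would screen for in a given output category."""
    return CHARACTERISTIC_HARMS[category]

print(harms_to_screen(OutputCategory.CONTROL_TOKENS))
```

Classifying an output first and then screening it against its category's harm profile mirrors the systematic harm identification the framework describes.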
The framework’s emphasis on outcome risks represents a strategic shift in AI governance. Unlike traditional approaches that focus on process transparency or bias audits, IEEE P3396 advocates for evaluating systems based on their actual impacts. This approach aligns with emerging regulations, such as the European Union’s AI Act, which categorizes risks by application rather than technology type.
By prioritizing outcome risks, the framework encourages a proactive stance in which governance strategies are tailored to mitigate real-world harms effectively. This shift not only enhances safety but also fosters trust in AI technologies by ensuring that their deployment is both responsible and beneficial.
IEEE P3396’s approach to risk assessment marks a significant advance in AI governance. Its structured methodology, built on four distinct information categories, gives practitioners a robust toolset for identifying and mitigating harms, while its outcome-first orientation keeps evaluations grounded in tangible impacts. As the regulatory landscape evolves, the framework offers a model of proactive governance for the responsible development and deployment of generative AI technologies.
Conclusion: A Path Forward for Responsible AI
IEEE P3396 offers a comprehensive and forward-thinking approach to managing risks in generative AI systems. By prioritizing outcome risks and employing an information-centric ontology, it sets a new standard for governance that is both practical and actionable. As we navigate the complexities of AI integration, frameworks like IEEE P3396 will be instrumental in ensuring that technological advances serve societal good responsibly and effectively.
More information
Richard J. Tong et al., *A First-Principles Based Risk Assessment Framework and the IEEE P3396 Standard*, arXiv:2504.00091 (2025).
DOI: https://doi.org/10.48550/arXiv.2504.00091
