PRISM Risk Signal Framework: Hierarchy-Based Red Lines for AI Behavioral Risk
Seulki Lee
Feedback
Why It Matters
This paper shifts AI safety red lines from individual cases to the reasoning structures that produce them, aiming to detect dangerous value, evidence, and source hierarchies before they yield harmful outputs.
Contributions
- Introduces a taxonomy of 27 behavioral risk signals derived from structural anomalies in the value, evidence, and source hierarchies that govern AI reasoning.
- Demonstrates the framework's detection capacity on approximately 397,000 forced-choice responses from 7 AI models.
Insights
- Red lines in AI safety can be set more effectively at the level of reasoning hierarchies than at individual cases.
Limitations
- The framework's effectiveness may vary across different AI architectures and applications.
Tags
- alignment
- evaluation
- interpretability
- security
Abstract
arXiv:2604.11070v1
Current approaches to AI safety define red lines at the case level: specific prompts, specific outputs, specific harms. This paper argues that red lines can be set more fundamentally -- at the level of value, evidence, and source hierarchies that govern AI reasoning. Using the PRISM (Profile-based Reasoning Integrity Stack Measurement) framework, we define a taxonomy of 27 behavioral risk signals derived from structural anomalies in how AI systems prioritize values (L4), weight evidence types (L3), and trust information sources (L2). Each signal is evaluated through a dual-threshold principle combining absolute rank position and relative win-rate gap, producing a two-tier classification (Confirmed Risk vs. Watch Signal). The hierarchy-based approach offers three advantages over case-specific red lines: it is anticipatory rather than reactive (detecting dangerous reasoning structures before they produce harmful outputs), comprehensive rather than enumerative (a single value-hierarchy signal subsumes an unlimited number of case-specific violations), and measurable rather than subjective (grounded in empirical forced-choice data). We demonstrate the framework's detection capacity using approximately 397,000 forced-choice responses from 7 AI models across three Authority Stack layers, showing that the signal taxonomy successfully discriminates between models with structurally extreme profiles, models with context-dependent risk, and models with balanced hierarchies.
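To make the dual-threshold principle concrete, the sketch below shows how a two-tier classification could be computed from a signal's absolute rank position and relative win-rate gap. This is a hypothetical reconstruction, not the paper's implementation: the threshold values, the rule that both criteria must fire for a Confirmed Risk while either one alone yields a Watch Signal, and all identifiers (`classify_signal`, `Tier`, the parameter names) are assumptions for illustration.

```python
from enum import Enum


class Tier(Enum):
    CONFIRMED_RISK = "Confirmed Risk"
    WATCH_SIGNAL = "Watch Signal"
    NO_FLAG = "No Flag"


def classify_signal(rank: int, win_rate_gap: float,
                    rank_threshold: int = 3,
                    gap_threshold: float = 0.15) -> Tier:
    """Dual-threshold rule (illustrative): a signal that clears both the
    absolute rank-position test and the relative win-rate-gap test is a
    Confirmed Risk; clearing exactly one yields a Watch Signal. The
    threshold values and the AND/OR combination are assumptions, not
    values taken from the paper.
    """
    extreme_rank = rank <= rank_threshold      # absolute criterion: item sits near the top of the hierarchy
    large_gap = win_rate_gap >= gap_threshold  # relative criterion: it beats the runner-up by a wide margin
    if extreme_rank and large_gap:
        return Tier.CONFIRMED_RISK
    if extreme_rank or large_gap:
        return Tier.WATCH_SIGNAL
    return Tier.NO_FLAG


# Example: a value ranked 1st with a 0.22 win-rate gap over the runner-up
print(classify_signal(rank=1, win_rate_gap=0.22))  # Tier.CONFIRMED_RISK
```

In the paper's taxonomy, each of the 27 signals would presumably be evaluated this way per model and per Authority Stack layer, though that mapping is likewise assumed here.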