According to a recent ISACA article, a risk assessment approach involves examining the connections between assets, processes, threats, vulnerabilities, and other environmental factors ^{7}. While various methods exist, quantitative and qualitative analyses are the most commonly utilized ^{7 8}.
The methodology chosen at the outset of the decision-making process should offer a quantitative account of the impact of risk and security issues, including risk classification and the creation of a risk register. It should also include qualitative statements explaining the significance and suitability of controls and security measures for mitigating these risk areas ^{7 9}. The qualitative approach relies on the experience and expertise of the risk analyst, while the quantitative approach employs mathematical models and statistical analysis ^{7 8}. Furthermore, the degree of uncertainty in the risk assessment process plays a crucial role in determining the most appropriate methodology, contingent on the available information about the risk and the level of confidence in that information ^{7}.
As per Wikipedia, Possibility Theory, an offshoot of Fuzzy Logic, is a mathematical framework for reasoning under uncertainty and an alternative to probability theory ^{3}. Possibility theory employs two measures, possibility and necessity, each taking values in [0, 1]: possibility ranges from impossible (0) to fully possible (1), and necessity from unnecessary (0) to necessary (1) ^{3}. In CyberSecurity, the degree of possibility of a threat exploiting a vulnerability signifies the likelihood of such an event occurring, while the degree of necessity reflects the impact (level of damage) of this occurrence.
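The two measures can be sketched concretely. In possibility theory, a possibility distribution assigns each outcome a degree in [0, 1]; the possibility of an event is the maximum over its outcomes, and its necessity is one minus the possibility of the complement. The outcomes and values below are purely hypothetical, chosen only to illustrate the arithmetic:

```python
# A minimal sketch of possibility and necessity measures (hypothetical values).
# pi is a possibility distribution: each outcome gets a degree of possibility in [0, 1].

pi = {
    "no_breach": 1.0,       # entirely possible that nothing happens
    "minor_breach": 0.7,    # somewhat possible
    "major_breach": 0.3,    # less possible, but not ruled out
}

def possibility(event, pi):
    """Pos(A): the degree of the most possible outcome inside event A."""
    return max(pi[o] for o in event)

def necessity(event, pi):
    """Nec(A) = 1 - Pos(not A): A is necessary only if every alternative is impossible."""
    complement = set(pi) - set(event)
    return 1.0 - (max(pi[o] for o in complement) if complement else 0.0)

breach = {"minor_breach", "major_breach"}
print(possibility(breach, pi))  # 0.7
print(necessity(breach, pi))    # 0.0  (a no-breach outcome is still fully possible)
```

Note how a breach can be quite possible (0.7) while not at all necessary (0.0), which is exactly the asymmetry that a single probability number cannot express.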
In contrast to probability, which assigns a single numerical value to an event, fuzzy logic deals with fuzzy sets covering ranges of values that are not mutually exclusive. For instance, in fuzzy logic, risk may be described using terms like "very high," "high," or "moderately high," each encompassing a range of risk within [0, 1], akin to a qualitative analysis approach ^{4}.
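This overlap of linguistic terms can be made concrete with membership functions. The following sketch uses trapezoidal memberships with purely hypothetical breakpoints; a real rating scale would be calibrated to the organization:

```python
# A minimal sketch of overlapping fuzzy sets for risk levels (hypothetical breakpoints).
# A normalized risk score can belong to several linguistic terms at once,
# each with its own degree of membership, unlike mutually exclusive probability bins.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises from a to b, flat from b to c, falls from c to d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical linguistic terms over a normalized risk score in [0, 1]
risk_terms = {
    "moderately high": lambda x: trapezoid(x, 0.4, 0.5, 0.6, 0.7),
    "high":            lambda x: trapezoid(x, 0.6, 0.7, 0.8, 0.9),
    "very high":       lambda x: trapezoid(x, 0.8, 0.9, 1.0, 1.01),  # open right shoulder
}

score = 0.85
memberships = {term: round(mu(score), 2) for term, mu in risk_terms.items()}
print(memberships)  # a single score is partly "high" and partly "very high"
```

A score of 0.85 comes out half "high" and half "very high", which mirrors how an analyst might hedge between two adjacent qualitative ratings.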
Fuzzy logic and probability are distinct concepts. Fuzzy logic pertains to partial truth on [0, 1], while probability gauges the likelihood of an event based on well-known past occurrences (how certain are we?). Predicting CyberSecurity events involves a high degree of uncertainty, making it challenging to determine the probability of a threat exploiting a vulnerability. In actuality, this probability is better characterized as a possibility (a "not if, but when" scenario). The degree of possibility offers a more suitable gauge of the likelihood of a threat exploiting a vulnerability than probability does ^{5}. Nevertheless, quantifying the degree of possibility remains a subject of ongoing research in CyberSecurity ^{6}. Even so, it remains clear that in CyberSecurity a breach is all but inevitable, and what can always be measured is the possibility of a threat exploiting a vulnerability, not its probability. Given its superior suitability for this purpose, the next question is how to measure this degree of possibility effectively. What is the most accurate and cost-effective approach to risk measurement in CyberSecurity?
Open FAIR, or Open Factor Analysis of Information Risk (Open FAIR™), is a risk analysis methodology built on two standards: the Open Risk Taxonomy Standard (O-RT) and the Open Risk Analysis Standard (O-RA). FAIR™ was crafted to aid organizations in transitioning from a compliance-driven to a risk-centered information security paradigm. It furnishes a standardized taxonomy and ontology for information and operational risk, along with a framework for establishing data collection criteria, measurement scales for risk factors, and a quantitative, probabilistic model for analyzing intricate risk scenarios ^{1}.
The FAIR™ Risk Analysis methodology, created by Jack Jones and stewarded by the FAIR Institute ^{1}, is grounded in the following principle:
FAIR™ is a quantitative model for information security and operational risk that enables organizations to understand their potential loss exposure in well-defined, historical, and statistical financial terms, driving business decisions in an informed, reliable, and defensible manner. The FAIR™ methodology has become the only international standard Value at Risk (VaR) model for CyberSecurity and operational risk ^{1}.
To conduct a thorough Quantitative Risk Analysis using FAIR™, follow these steps: identify the scenario components (the asset at risk and the threat community), evaluate the Loss Event Frequency (LEF), evaluate the Loss Magnitude (LM), and derive and articulate the resulting risk.
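The core of the FAIR™ derivation is that risk is the combination of Loss Event Frequency and Loss Magnitude. The Monte Carlo sketch below illustrates that combination; every calibration range in it is a hypothetical placeholder, not data from the Open FAIR standard:

```python
import random

# A minimal Monte Carlo sketch of a FAIR-style derivation: Risk = LEF x LM.
# All estimate ranges below are hypothetical placeholders for illustration only.

random.seed(7)  # fixed seed so the sketch is reproducible

def triangular_samples(low, mode, high, n):
    """Draw n samples from a triangular distribution (a common calibrated-estimate shape)."""
    return [random.triangular(low, high, mode) for _ in range(n)]

N = 10_000
# Loss Event Frequency: simulated loss events per year (hypothetical estimate)
lef = triangular_samples(0.1, 0.5, 2.0, N)
# Loss Magnitude: simulated cost per event in dollars (hypothetical estimate)
lm = triangular_samples(10_000, 50_000, 250_000, N)

# Annualized loss for each simulated year
annual_loss = [f * m for f, m in zip(lef, lm)]
ale = sum(annual_loss) / N                  # Annualized Loss Expectancy (mean)
p95 = sorted(annual_loss)[int(0.95 * N)]    # 95th-percentile, a VaR-style figure

print(f"ALE ~ ${ale:,.0f}; 95th-percentile annual loss ~ ${p95:,.0f}")
```

Reporting both the mean (ALE) and a high percentile is what lets FAIR-style analyses express loss exposure in the financial, Value-at-Risk terms described above.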
In the event that a threat materializes and causes harm to the organization, a response is necessary to mitigate the situation. There are four types of risk responses available: avoid the risk, mitigate (reduce) it, transfer (share) it, or accept it.
The FAIR™ model furnishes a clear and comprehensive financial report that delineates the various components of risk in an easily understandable manner, aiding in communication with Corporate Governance. It establishes a shared language for discussing risks, enables a portfolio-based view of organizational risks, and supports advanced risk modeling, culminating in a robust risk response framework.
In summary, Open FAIR™ is a powerful framework that empowers organizations to quantify cyber risk and operational risk in a manner that illuminates the actual cost in annualized financial terms. Moreover, FAIR™ provides a common language for discussing risks, enables portfolio views of organizational risks, and facilitates advanced risk modeling ^{1} that supports the organization's goals, objectives, and decision-making in a cost-effective manner.
Johnny Sandaire PhD, CISSP, CC, OpenFAIR, MCAD