Article

Are We Objective? A Study into the Effectiveness of Risk Measurement in the Water Industry

1 Faculty of Veterinary and Agricultural Sciences, The University of Melbourne, Melbourne 3052, Australia
2 Melbourne School of Engineering, The University of Melbourne, Melbourne 3052, Australia
* Author to whom correspondence should be addressed.
Sustainability 2019, 11(5), 1279; https://doi.org/10.3390/su11051279
Submission received: 15 January 2019 / Revised: 25 February 2019 / Accepted: 25 February 2019 / Published: 28 February 2019
(This article belongs to the Section Economic and Business Aspects of Sustainability)

Abstract

A survey of 77 water practitioners within Melbourne, Australia, highlighted the lack of objectivity in current risk scoring processes. Each water authority adopts similar processes, all of which adhere to the ISO 31000 standard on risk management, and these were tested in this study to determine how "objective" such technical risk assessments really are. The outcome of the study indicated that current risk measurement approaches cannot be seen as objective. This is due to the high variation in risk scores between individuals, which indicates a level of subjectivity. The study confirms previous research that has been undertaken in assessing the effectiveness of risk matrices. This research is novel in its testing of the water sector's risk measuring practices and may be of value to other industries that utilize similar risk approaches. It also raises the question of whether this subjectivity is due to inherent psychological or cultural risk biases that could produce the varied scores.

1. Introduction

Technical risk measurement, widely used in engineering projects, has historically been touted as an objective process by which to measure risk. By quantifying risk, proponents of this approach argue that it removes human subjectivity, providing the risk assessor with a risk measurement that is true and accurate. It provides the basis for decision-making, especially in options assessments. The technical measurement approach underpins the international standard ISO 31000 on risk management and is used widely throughout the engineering profession and within the water industry [1]. Theorists have highlighted the flaws of these quantitative technical risk assessments, as much of the method relies on presuming the rationality of the individuals who undertake the assessment [2,3,4]. Within the water sector, risk measurement provides a decision-making tool for options assessments. Many innovative or sustainable options are proposed; however, these are at the whim of the risk assessor's own perceptions and their own scoring of such risks. Therefore, whether sustainable projects are given funding (the business element) centers predominantly on the risk rating afforded to them by the assessor, and on the assessor's own intrinsic values. This research aims to highlight that current risk assessment processes are not objective and are entirely dependent on the individual undertaking the assessment. This carries implications for the use of taxpayer money in the public sector, and for whether that money is used fairly when it is allocated according to a flawed scoring system. Additionally, it could carry implications for ensuring adaptation to climate change, if risk assessors hold personal values that are not in agreement with adaptation approaches.
This research explores the supposedly "objective" nature of the technical risk measurement approach adopted by water practitioners in the public sector, to determine whether it is effective at rationally and accurately measuring a risk. Previous research [2,4] has emphasized that the risk matrix approach is flawed in its measurement of risk, and this is tested here to determine whether it also applies to the water sector's handling of risk. Water professionals in Melbourne, Australia, were recruited for this study and asked to undertake risk assessments for seven fictional projects, both familiar and unfamiliar, to determine whether there was a substantial difference in scores. This provides valuable insight into the effectiveness of risk measurement approaches, and into whether there may be an improved way of undertaking the assessments. Furthermore, this research can be used to highlight drawbacks and weaknesses of existing risk measurement, a tool utilized in many industries throughout the world. Such a study has not previously been undertaken in the water sector, and it thus provides a novel glimpse into risk and decision making, particularly in relation to funding allocation in the public sector.

Technical Risk Measurement

The risk measurement approaches within the water industry in Australia are predominantly based on existing Australian standards for risk assessments [1]. The standards adopt a theory of assessment grounded in the technical risk approach, a theory conceived in the 1950s and 1960s through Starr's work on risk [5]. The technical approach centers on the theory that risk exists and that it can be measured objectively. The "rational" risk assessor underpins the theory, leading to results that are objective and "true". The probabilistic risk assessment framework is a form of the technical risk approach that is applied in risk assessments throughout the world. This framework, first utilized by the US Nuclear Regulatory Commission for its report on reactor safety, quantifies risk by first determining the likelihood (or probability) of a hazard occurring and then the magnitude of its consequence [6]. The two figures are used to determine an overall risk score with the use of a "risk matrix", a table of risk ratings with two inputs: likelihood and consequence. The report on reactor safety was published in 1975, and since then the risk matrix, and its reliance on technical risk theory, has been widely utilized in risk assessments worldwide.
Within his theory, Starr touches on social impacts in risk assessments, highlighting that they can, in fact, be measured quantitatively and in an objective manner. Significant refutation of this point stems from Self's seminal work on the "econocracy", which argues that costing elements such as social good can be flawed and heavily biased [7]. Furthermore, its very attempt to quantify elements that cannot easily be quantified leads to an inherent risk in its use [8]. By selling the objectiveness of the technical risk process, Starr creates a process that ignores a key flaw in the people undertaking the assessment: that they are arguably non-rational. The technical risk assessment's reliance on the rational actor is predicated on the belief that it will produce decisions that are positivist and objective [9]. The assumption is that, provided the risk assessor is rational, the assessment should always result in the same positivist outcome, regardless of who performs it. Much of the technical-scientific literature focuses on the identification of risk, how it is calculated, and how accurate the result is [10]. In the last two decades, many theorists have presented criticisms of technical risk measurement. In particular, this approach cannot easily quantify the social effects of risks, as it relies heavily on the availability of legitimate statistical and fiscal data [11,12].
Risk assessments in practice diverge from the advances made in the academic literature. Research in the past decade has described the impracticality, and even danger, of utilizing risk matrices to measure hazards [2,13]. Despite these warnings, risk matrices are still used and form a key decision-making tool in the public-sector water industry.
Aven [3] highlights the inherent issue of assessors viewing uncertainty and probability as much the same concept, stating that the two are vastly different. Flage and Aven [14] also explore inconsistencies in assessors' understanding of probabilities in general. This criticism extends further to value-based judgements, and to the inability (or resistance) to rely on statistics and data to form the assessment.
A key element of the staying power of risk matrices lies in their simplicity. As a tool that can be rolled out across an organization with diverse functions, while also allowing a seemingly rational approach to quantifying risk, it is a tempting template to use. However, behind these risk measurement tools lies some degree of subjectivity, combined with the danger of hiding any risk aversion on the part of the assessor [15].
Decision-making will often carry unexpected or unpredictable outcomes, and risk matrices may not be the ideal way of measuring the risk. The practice of reducing two dimensions (consequence and likelihood) into one dimension is rife across the public sector in Australia [16,17]. This carries its own issues, in the assumption that a likelihood score of 3 will be given equal weight to a consequence score of the same value, despite the two measuring vastly different properties. Within the literature, many academics have devised alternatives to risk matrices [18,19]; however, the existing matrix approach still pervades industry practice.
Many other risk theories exist that refute this approach, such as those based on psychological, sociological, and cultural aspects, while also providing alternative viewpoints in the risk debate [20,21]. Psychological approaches to risk, such as the effect of cognitive bias on risk perceptions, have propagated within the literature, pushed by the key theorist Paul Slovic [22]. Furthermore, sociological theories of "new risks" [8] and cultural theories [23] have also taken a stand in explaining risk and risk behaviors. These are not commonly reflected in current uses of risk measurement approaches.
Other studies have examined the language and rhetoric of risk in the water industry, spanning many differing types of risk, such as reputation-based and safety-based risk [24]; however, a quantitative assessment of this process has not yet been undertaken. The research explored within this study considers the objective nature of existing risk assessments, and whether they can, in fact, be considered positivist, thus confirming previous similar studies in this area [2,4].

2. Materials and Methods

In-person surveys were undertaken at four metropolitan water authorities within Melbourne, Australia, resulting in a total of 77 respondents. Each participant was recruited through their water authority, with two key prerequisites: that they are water professionals with decision-making authority on water projects, and that they have previously undertaken a risk assessment within their role.
Each participant was provided with seven fictional projects of varied scale and type. The project descriptions are outlined in Table 1. The participants were required to use their existing organizational risk assessment framework (which was provided for reference) to determine a risk score for each project, as they would ordinarily do when conducting risk assessments. As all four water authorities have similar risk assessment processes, all based on the same industry standard (ISO 31000), their scores can be readily compared.
Each respondent provided risk scores from 0 to 25 (with the exception of Melbourne Water respondents, who provided a score from 0 to 10). The Melbourne Water scores were scaled up to ensure they were consistent with all other scores (i.e., they were scaled to fit within the 0 to 25 framework).
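As a rough illustration of this rescaling step (the exact conversion is not detailed here, so a simple linear scaling is assumed in this Python sketch):

```python
def rescale_score(score, old_max=10, new_max=25):
    """Rescale a risk score from a 0-old_max scale to a 0-new_max scale (assumed linear)."""
    return score * new_max / old_max

# Example: a Melbourne Water score of 8 (out of 10) would map to 20 (out of 25).
print(rescale_score(8))  # 20.0
```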
This risk score was formulated from two separate ratings: risk likelihood and risk consequence. Both scores ranged from 0 to 5 and were then multiplied to form the final risk score. There were also four risk ratings: low (1–5), medium (6–10), high (11–16), and extreme (above 16). Scores for each project were analyzed using the IBM SPSS program (a statistical package). When reporting on risk, these authorities typically report the risk score on the basis of the risk matrix shown in Table 2, hence this one-dimensional figure is utilized in this study.
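The scoring and rating logic can be sketched as follows (a minimal illustration only; the function names are hypothetical, the study itself used IBM SPSS for its analysis, and the rating cut-offs follow the matrix in Table 2):

```python
def risk_score(likelihood: int, consequence: int) -> int:
    """Combine a 1-5 likelihood rating and a 1-5 consequence rating into a single score."""
    return likelihood * consequence

def risk_rating(score: int) -> str:
    """Map a risk score to a rating band, using the cut-offs implied by the matrix in Table 2."""
    if score <= 5:
        return "low"
    if score <= 10:
        return "medium"
    if score <= 16:
        return "high"
    return "extreme"

score = risk_score(likelihood=4, consequence=3)
print(score, risk_rating(score))  # 12 high
```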

2.1. Transforming Data

As the scores were formed by multiplying two factors, consequence and likelihood, the data were more likely to be positively skewed. The square root of each score was therefore taken; this was conceptually appealing because the square root of the product of consequence and likelihood equals the geometric mean of the two. This transformation did not act as harshly upon the data as the alternative, a natural log transformation, and was therefore used in the analysis of the data.
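A minimal sketch of this transformation choice, using illustrative scores rather than the study data (skewness is computed with scipy so the effect of each transformation can be compared):

```python
import numpy as np
from scipy.stats import skew

# Illustrative risk scores only (products of 1-5 likelihood and consequence ratings).
raw_scores = np.array([1, 2, 4, 4, 6, 6, 8, 9, 12, 15, 20, 25])

sqrt_scores = np.sqrt(raw_scores)  # square root of likelihood x consequence = their geometric mean
log_scores = np.log(raw_scores)    # the harsher alternative transformation

print("raw skewness: ", skew(raw_scores))
print("sqrt skewness:", skew(sqrt_scores))
print("log skewness: ", skew(log_scores))
```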

2.2. Defining “Objective”

Objectivity in this context assumes that each risk assessor reaches the same risk assessment outcome when presented with exactly the same information. The process was deemed objective if respondents reported scores that fell within the same risk rating (low, medium, high, or extreme). All participants were provided with their organization's predefined risk assessment procedure, and this was used to determine their risk score. The risk rating is ultimately one of the major decision-making mechanisms in funding allocations and options assessments within the water industry in Melbourne; therefore, provided a score falls within the same category, it can be said not to drastically affect the progression of a project. This process tests the impact of the individual upon risk assessments and, therefore, any subjectivity that may arise from personal risk perceptions.
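This criterion can be expressed as a short check (a hedged sketch with hypothetical function names; the rating cut-offs again follow the matrix in Table 2):

```python
def rating_of(score: float) -> str:
    """Map a risk score to a rating band (cut-offs as implied by the matrix in Table 2)."""
    if score <= 5:
        return "low"
    if score <= 10:
        return "medium"
    if score <= 16:
        return "high"
    return "extreme"

def is_objective(scores) -> bool:
    """True only if every respondent's score for a project falls within the same rating band."""
    return len({rating_of(s) for s in scores}) == 1

print(is_objective([4, 8, 12]))    # False: these illustrative scores span low, medium and high
print(is_objective([12, 15, 16]))  # True: all three fall within the high band
```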
The risk categories are outlined in Table 3. Please note that, due to the multiplication of consequence and likelihood scores (both out of 5), some risk scores could not be obtained. For example, 17–19 were excluded from the table as they cannot be obtained by multiplying two numbers between 1 and 5.
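The attainable scores can be enumerated directly, which makes clear why values such as 17–19 never occur (a small illustrative snippet):

```python
# All products of a 1-5 likelihood rating and a 1-5 consequence rating.
attainable = sorted({l * c for l in range(1, 6) for c in range(1, 6)})
print(attainable)
# [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 20, 25]
# Scores such as 7, 11, 13, 14 and 17-19 cannot arise from multiplying two integers between 1 and 5.
```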

3. Results

The summary descriptives for the raw project data are shown in Table 4. This shows the data split by project, incorporating all organizations' responses.
Each risk score was transformed using a square root function, as previously described. Upon transforming the data for Project 1, a few items became apparent. Primarily, the mean of the transformed data (2.49) was close to its median (2.45), indicating little skewness. The potential transformed scores could range from 1 through to 5. Within one standard deviation, scores ranged between 1.778 and 3.214, while the range within two standard deviations was 1.06 to 3.93.
The histograms shown in Figure 1 show typical results of the spread of the risk scores from all organizations after undergoing the square root transformation. The transformed data were more "normally" distributed and can therefore be used to make further inferences. To confirm this, Q–Q plots were generated (refer to the plots in Appendix A) and showed that the data generally adhered to a normal distribution.
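A sketch of how such a normality check could be produced (illustrative data only, not the study data; the Q–Q plot is generated with scipy's probplot):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
transformed = np.sqrt(rng.integers(1, 26, size=77))  # illustrative square-root-transformed scores

fig, (ax_hist, ax_qq) = plt.subplots(1, 2, figsize=(9, 4))
ax_hist.hist(transformed, bins=10)                    # histogram of the transformed scores
ax_hist.set_title("Transformed risk scores")
stats.probplot(transformed, dist="norm", plot=ax_qq)  # Q-Q plot against a normal distribution
plt.tight_layout()
plt.show()
```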
The data ranged from a possible score of 1 to 5 after the transformation. The range of the standard deviation was determined and then squared to "back-transform" the data, in order to give more meaningful results that fitted the risk scale. In the assessment of individual risk ratings of water professionals, as shown in Figure 2, the study exhibited a fair amount of variation between individuals. Each risk assessor was provided with the same information on each project, as well as an identical risk assessment process for their organization, and yet the scores varied widely among respondents. The first comparison assessed Projects 1 to 4. These projects were all considered "familiar" and "business as usual" within the water sector; as such, they were projects for which each individual would have undertaken similar assessments in the past.
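The back-transformation step described above can be sketched as follows (illustrative values only; the one and two standard deviation bounds are computed on the square-root scale and then squared to return them to the 0–25 risk-score scale):

```python
import numpy as np

# Illustrative square-root-transformed scores, not the study data.
transformed = np.sqrt(np.array([2, 4, 4, 6, 6, 8, 9, 10, 12, 15, 16, 20]))

mean, sd = transformed.mean(), transformed.std(ddof=1)
one_sd_range = ((mean - sd) ** 2, (mean + sd) ** 2)          # back-transformed ~68% range
two_sd_range = ((mean - 2 * sd) ** 2, (mean + 2 * sd) ** 2)  # back-transformed ~95% range

print("Range within 1 SD on the risk-score scale:", one_sd_range)
print("Range within 2 SD on the risk-score scale:", two_sd_range)
```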
Table 5 consolidates all projects, and their corresponding ranges, within one and two standard deviations of the mean. These ranges give an indication of the sheer spread of scores and are coupled with their corresponding risk ratings (noted alongside each score).
Considering the range that includes approximately 95% of the data, a project could receive a rating of low to extreme depending on who was undertaking the assessment. This essentially highlights that any risk rating within the range may be designated, depending on the assessor.

3.1. Risk Scoring within Organizations

The diverse range of ratings could arguably have been a result of grouping all the organizations together. To ensure that this was not simply an artefact of inconsistency between organizations, the researchers examined the data within each water authority to determine whether scores were more consistent within each organization, as each had slightly different risk assessment gradings. The risk score descriptives are shown by organization in Table 6, Table 7, Table 8 and Table 9.
The differences between the projects themselves became apparent when considering the range within one standard deviation. For Project 1, approximately 68% of the data fell within the range of low to high risk, whereas the same proportion fell between medium and extreme for Project 3. The data dispersed even further when considering the range within two standard deviations. Every project, within each organization, varied through the full range of risk rating options, from low to extreme. This highlights the inconsistent nature of the risk assessments. Some organizations, such as Organization 3, were more risk averse in some projects (e.g., ranging from medium to extreme within one standard deviation in Project 3), whereas others were less risk averse in other projects (e.g., Organization 4, Project 1). However, despite this, their ranges still did not change significantly when considering all of the responses.
Considering the standard deviation ranges above, within one standard deviation all organizations except Organization 2 ranged from low to medium or high, whereas within two standard deviations all water authorities ranged from low to high risk scores, with the exception of Organization 1, which ranged from low to extreme. This indicates that the risk assessment scores for Project 1 were highly variable and depended upon the risk assessor. The above tables and charts highlight that the subjectivity of risk scores is not an issue inherent in only one organization but, rather, in all organizational processes.

3.2. Choice of Projects Impact upon Scores

We also considered whether the projects themselves, or the researcher's choice of fictional projects, affected the risk scores and acted as a factor in the variation. The projects were chosen based on differences in scale, cost, type (some are construction-based, others social), and the amount of information provided. For every project, despite slightly differing ranges within one standard deviation, the full range of risk outcomes, from low to extreme ratings, appeared within 95% of the data. This high variance in scores, then, does not point to a difference in the type, scale, or cost of each project, but rather indicates that the risk assessor was the key factor in the variation of scores.
This was the case for the “familiar” projects; however, the scores were slightly different when considering “new” or “unfamiliar” projects (Projects 4A, 4B, and 4C) (see Table 10).
Comparing the "unfamiliar" projects' results to their more familiar counterparts, the range of the majority of the data was not dissimilar; it was merely shifted upwards in terms of its mean. The distribution was still wide, highlighting that, even if the projects themselves are changed, the variability of the results is not.

4. Conclusions

Considering the range and wide distribution of risk assessment scores within this study, one cannot state that the risk matrix assessment process is objective. The risk rating is thus dependent upon the person who undertakes the assessment, despite each risk assessor being provided with identical information and using the same organizational risk assessment process.
This finding provides a pathway to understanding decision-making within the water sector, and the role of risk assessment processes within it. The implication is particularly significant because the risk assessment forms a key component in determining funding allocations for projects: a high-risk option may not be allocated funding over a low-risk one. Therefore, the process by which the risk assessor determines these ratings is influential in the allocation of funds. In many cases around the world, funds for water-based projects are sourced from taxpayers; therefore, some level of public scrutiny into how these funds are allocated is fair and reasonable.
Understanding the role of risk assessments in a water project, and particularly their subjective nature, can provide a pathway to implementing new measurement approaches that create a less biased outcome. Further research is being undertaken to explore the key elements that separate each risk assessor's scores from one another. Other risk theories, such as psychological risk theory (a personal affiliation to a risk drives decision-making) and sociological risk theory (membership of a grid-group affects how each person rates a risk), could provide a pathway to understanding the way in which risk is perceived by each assessor [20]. This allows greater insight into how risk assessment processes may be altered to more effectively encompass the organizational risk appetite, with the aim of creating a more objective practice.
The public rely on government experts to undertake reliable risk analyses, for safety, sustainability, and planning, among other reasons. However, this study confirms what has been criticized previously [25]: that experts differ drastically when asked to quantify risks using their own organizational processes. This prompts questions of "who is right", and also about the role of cognitive biases in shaping results.

Author Contributions

Conceptualization, A.K., B.D., and H.M.; methodology, A.K., B.D., and H.M.; software, A.K.; formal analysis, A.K.; investigation, A.K.; writing—original draft preparation, A.K.; writing—review and editing, A.K., B.D., and H.M.; supervision, B.D. and H.M.

Funding

This work was co-funded by The University of Melbourne and Yarra Valley Water.

Acknowledgments

We acknowledge the guidance of the Melbourne Statistical Consulting Centre and Graham Hepworth for the statistical analysis of this work.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Q-Q Plots by Project

Figure A1. Normal Q–Q plots by project: (a) Project 1; (b) Project 2; (c) Project 3; (d) Project 4; (e) Project 4A; (f) Project 4B; (g) Project 4C.

References

1. Council of Standards Australia. Risk Management—Principles and Guidelines; Standards Australia: Sydney, Australia, 2009.
2. Hubbard, D.; Evans, D. Problems with scoring methods and ordinal scales in risk assessment. IBM J. Res. Dev. 2010, 54, 2:1–2:10.
3. Aven, T. Improving risk characterisations in practical situations by highlighting knowledge aspects, with applications to risk matrices. Reliab. Eng. Syst. Saf. 2017, 167, 42–48.
4. Ball, D.J.; Watt, J. Further Thoughts on the Utility of Risk Matrices. Risk Anal. 2013, 33, 2068–2078.
5. Starr, C. Social Benefit versus Technological Risk. Science 1969, 165, 1232–1238.
6. United States Nuclear Regulatory Commission. Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants; United States Nuclear Regulatory Commission: Washington, DC, USA, 1975.
7. Self, P. Econocrats and the Policy Process: The Politics and Philosophy of Cost-Benefit Analysis; Macmillan: London, UK, 1975.
8. Beck, U. Risk Society: Towards a New Modernity; Sage Publications: London, UK, 1992.
9. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton Classic Editions; Princeton University Press: Princeton, NJ, USA, 2007.
10. Lupton, D. Risk; Routledge: New York, NY, USA; Oxon, UK, 2013.
11. Renn, O. Concepts of Risk: A Classification. In Social Theories of Risk; Praeger Publishers: Westport, CT, USA, 1992; pp. 53–82.
12. Paté-Cornell, E. On "Black Swans" and "Perfect Storms": Risk Analysis and Management When Statistics Are Not Enough. Risk Anal. 2012, 32, 1823–1833.
13. Paté-Cornell, E. Risk and Uncertainty Analysis in Government Safety Decisions. Risk Anal. 2002, 22, 633–646.
14. Flage, R.; Aven, T.; Zio, E.; Baraldi, P. Concerns, Challenges, and Directions of Development for the Issue of Representing Uncertainty in Risk Assessment. Risk Anal. 2014, 34, 1196–1207.
15. Monat, J.P.; Doremus, S. Deficiencies in and Alternatives to Heat Map Risk Matrices for Project Risk Prioritization. J. Mod. Proj. Manag. 2018, 6.
16. Victorian Managed Insurance Authority. The Victorian Government Risk Management Framework Practice Guide; Victorian Managed Insurance Authority: Melbourne, Australia, 2016.
17. Australian Government, Department of Industry, Innovation and Science. Quantification of Consequence/Likelihood Matrices. Available online: https://archive.industry.gov.au/resource/Programs/LPSD/Risk-management/Appendices/Appendix-1-Risk-analysis/Pages/Quantification-of-consequencelikelihood-matrices.aspx (accessed on 15 February 2019).
18. Van Der Sluijs, J.P.; Craye, M.; Funtowicz, S.; Kloprogge, P.; Ravetz, J.; Risbey, J. Combining Quantitative and Qualitative Measures of Uncertainty in Model-Based Environmental Assessment: The NUSAP System. Risk Anal. 2005, 25, 481–492.
19. Ruan, X.; Yin, Z.; Frangopol, D.M. Risk Matrix Integrating Risk Attitudes Based on Utility Theory. Risk Anal. 2015, 35, 1437–1447.
20. Kosovac, A.; Davidson, B.; Malano, H.; Cook, J. The Varied Nature of Risk and Considerations for the Water Industry: A Review of the Literature. Environ. Nat. Resour. Res. 2017, 7, 80–86.
21. Thompson, M.; Ellis, R.; Wildavsky, A. Cultural Theory (Political Cultures Series); Westview Press: Boulder, CO, USA, 1990.
22. Slovic, P. Perception of Risk. In Social Theories of Risk; Praeger Publishers: Westport, CT, USA, 1992; pp. 117–153.
23. Douglas, M.; Wildavsky, A.B. Risk and Culture: An Essay on the Selection of Technological and Environmental Dangers; University of California Press: Berkeley, CA, USA, 1982.
24. Kosovac, A.; Hurlimann, A.; Davidson, B. Water Experts' Perception of Risk for New and Unfamiliar Water Projects. Water 2017, 9, 976.
25. Rae, A.; Alexander, R. Forecasts or fortune-telling: When are expert judgements of safety risk valid? Saf. Sci. 2017, 99, 156–165.
Figure 1. Risk scores (square root transformation) for two typical projects, including normal distribution curve: (a) histogram for Project 1; and (b) histogram for Project 2.
Figure 2. Histograms of risk scores (transformed) by project. Bar color indicates risk rating: blue = low; yellow = medium; orange = high; red = extreme. (a) Project 1; (b) Project 2; (c) Project 3; (d) Project 4; (e) Project 4A; (f) Project 4B; and (g) Project 4C.
Table 1. Description of the projects in the survey.

Fictional Project in Survey   Brief Description
1 (Familiar project)          Pipe replacement along a busy road
2 (Familiar project)          Construction of a new water pump station
3 (Familiar project)          Construction of a recycled water treatment plant
4 (Familiar project)          Public campaign for water conservation
4A (Unfamiliar project)       Creating recycled water for potable uses
4B (Unfamiliar project)       Implementation of a new radiation-based water treatment method
4C (Unfamiliar project)       Removal of fluoride dosing from existing potable water supply
Table 2. Risk matrix of water authorities in the study (F. Portelli, personal communication, 26 June 2018).

Likelihood \ Consequence   1 (Low consequence)   2          3          4           5 (High consequence)
1 (Very unlikely)          1 (LOW)               2 (LOW)    3 (LOW)    4 (LOW)     5 (LOW)
2                          2 (LOW)               4 (LOW)    6 (MED)    8 (MED)     10 (MED)
3                          3 (LOW)               6 (MED)    9 (MED)    12 (HIGH)   15 (HIGH)
4                          4 (LOW)               8 (MED)    12 (HIGH)  16 (HIGH)   20 (EXTR)
5 (Highly likely)          5 (LOW)               10 (MED)   15 (HIGH)  20 (EXTR)   25 (EXTR)
Table 3. Risk rating score ranges.

Risk Rating   Range From (Inclusive)   Range To (Inclusive)
Low           1                        4
Medium        5                        9
High          10                       16
Extreme       20                       25
Table 4. Summary statistical descriptives for survey data by project.

Project   N    Range   Minimum Risk Score   Maximum Risk Score   Mean Risk Score
1         77   19      1                    20                   6.74
2         77   19      1                    20                   9.96
3         77   23      2                    25                   10.65
4         77   19      1                    20                   8.18
4A        76   23      2                    25                   13.70
4B        76   24      1                    25                   12.63
4C        73   24      1                    25                   10.18
Note: "N" refers to the number of data points.
Table 5. Scores by project (back-transformed) within one and two standard deviations.

Project No.   Range within 1 SD (~68%)           Range within 2 SD (~95%)
Project 1     3.16 (LOW) to 10.33 (HIGH)         1.12 (LOW) to 15.44 (HIGH)
Project 2     4.78 (LOW) to 15.16 (HIGH)         1.78 (LOW) to 22.53 (EXTREME)
Project 3     5.35 (MEDIUM) to 15.96 (HIGH)      2.17 (LOW) to 23.39 (EXTREME)
Project 4     2.97 (LOW) to 13.41 (HIGH)         0.57 (LOW) to 21.44 (EXTREME)
Project 4A    7.26 (MEDIUM) to 20.16 (EXTREME)   3.22 (LOW) to 29.03 (EXTREME)
Project 4B    6.56 (MEDIUM) to 18.72 (EXTREME)   2.82 (LOW) to 27.14 (EXTREME)
Project 4C    4.24 (LOW) to 16.14 (HIGH)         1.16 (LOW) to 24.98 (EXTREME)
Table 6. Project 1 risk scores by organization.

Project 1—Pipe Replacement   Range within 1 SD (~68% of data)   Range within 2 SD (~95% of data)
Organization 1               3.8 (LOW) to 13.1 (HIGH)           1.3 (LOW) to 19.8 (EXTREME)
Organization 2               4.1 (LOW) to 9.7 (MEDIUM)          2.1 (LOW) to 13.5 (HIGH)
Organization 3               2.8 (LOW) to 10.2 (HIGH)           0.8 (LOW) to 15.6 (HIGH)
Organization 4               2.8 (LOW) to 9.6 (MEDIUM)          0.9 (LOW) to 14.6 (HIGH)
Table 7. Project 2 risk scores by organization.

Project 2—Pump Station Installation   Range within 1 SD (~68% of data)   Range within 2 SD (~95% of data)
Organization 1                        4.1 (LOW) to 18.2 (EXTREME)        0.8 (LOW) to 28.9 (EXTREME)
Organization 2                        4.9 (LOW) to 15.7 (HIGH)           1.8 (LOW) to 23.4 (EXTREME)
Organization 3                        6.7 (MEDIUM) to 16.8 (EXTREME)     3.3 (LOW) to 23.6 (EXTREME)
Organization 4                        4.6 (LOW) to 13.1 (HIGH)           2.0 (LOW) to 19.0 (EXTREME)
Table 8. Project 3 risk scores by organization.

Project 3—Construct Sewage Treatment Plant/Recycled Water Treatment Plant   Range within 1 SD (~68% of data)   Range within 2 SD (~95% of data)
Organization 1                                                              5.2 (MEDIUM) to 19.3 (EXTREME)     1.5 (LOW) to 29.6 (EXTREME)
Organization 2                                                              5.4 (MEDIUM) to 14.3 (HIGH)        2.6 (LOW) to 20.2 (EXTREME)
Organization 3                                                              5.5 (MEDIUM) to 18.6 (EXTREME)     1.9 (LOW) to 28.1 (EXTREME)
Organization 4                                                              5.6 (MEDIUM) to 14.7 (HIGH)        2.6 (LOW) to 20.8 (EXTREME)
Table 9. Project 4 risk scores by organization.

Project 4—Save Water Campaign   Range within 1 SD (~68% of data)   Range within 2 SD (~95% of data)
Organization 1                  2.3 (LOW) to 11.9 (HIGH)           0.3 (LOW) to 19.4 (EXTREME)
Organization 2                  3.6 (LOW) to 13.9 (HIGH)           1.0 (LOW) to 21.5 (EXTREME)
Organization 3                  4.2 (LOW) to 15.6 (HIGH)           1.2 (LOW) to 24.1 (EXTREME)
Organization 4                  2.7 (LOW) to 13.3 (HIGH)           0.4 (LOW) to 21.6 (EXTREME)
Table 10. Project 4A–4C risk scores.

"Unfamiliar" Projects                                       Range within 1 SD (~68% of data)   Range within 2 SD (~95% of data)
Project 4A—Using Recycled Water as Potable                  7.3 (MEDIUM) to 20.2 (EXTREME)     3.2 (LOW) to 29.0 (EXTREME)
Project 4B—Using Radiation in Treatment of Drinking Water   6.6 (MEDIUM) to 18.7 (EXTREME)     2.8 (LOW) to 27.1 (EXTREME)
Project 4C—Removing Fluoride from Drinking Water Supply     4.2 (LOW) to 16.1 (HIGH)           1.2 (LOW) to 25.0 (EXTREME)
