1 Introduction

There is now significant evidence that systems accident analysis (SAA) methods are required to understand the incidents that occur during “led” (i.e. facilitated or instructed) outdoor activities [13]. These methods are underpinned by the idea that safety in sociotechnical systems is impacted by decisions and actions made at all levels of that system, not just by human operators working at the so-called ‘sharp-end’. Therefore, accidents are caused by multiple factors that go beyond the immediate context of the incident itself. Accidents and safety are described as emergent properties arising from the interactions of components within a system [4, 5]. Studies which have applied SAA methods (i.e. STAMP and Accimap) to the analysis of both fatal [1, 2] and relatively minor injury-causing incidents [3] have demonstrated that these principles also apply to the led outdoor activity domain. Moreover, SAA methods provide a deeper understanding of how interactions across the system contribute to hazardous conditions and unsafe behaviour during led outdoor activities, compared to other “root cause” analysis methods that have been developed specifically for the domain [1].

Despite the proposed advantages of SAA for understanding incident causation in this domain, this approach has not yet become common practice. This reflects a wider research-practice gap whereby researchers are applying more advanced methodologies than practitioners [6]. The gap is a significant issue in many safety critical domains [6]. Researchers have demonstrated the applicability of SAA, and its advantages over non-systemic methods, in a wide range of safety critical domains, including space exploration [7], aviation [8], rail [9], public health [10], disaster management [11] and road freight transport [12, 13]. In practice, most investigations are still underpinned by linear chain-of-events models [5, 14]. According to Leveson [5], these models oversimplify accident processes, and cannot represent situations where accidents are caused by interactions between components, rather than individual component failures. Underwood and Waterson [6] identified a number of barriers preventing the adoption and usage of SAA methods by practitioners, including a lack of awareness, a lack of training opportunities, poor accessibility and communication of information, usability issues, resource constraints, and concerns over the reliability and validity of SAA methods.

The Understanding and Preventing Led Outdoor Accidents Data System (UPLOADS) was designed to address this research-practice gap by providing a series of tools that risk managers can use to collect and analyse led outdoor activity incident data. UPLOADS consists of incident reporting, storage, coding and analysis methods that formalise the application of Rasmussen’s [4] risk management framework (RRMF) and its associated Accimap technique in this domain. A software tool and incident reporting form support the collection of detailed information on the events leading up to, and during, incidents, from the perspective of those involved in the incident (i.e. activity leaders) and those involved in activity planning and organisational management (i.e. managers). Managers then enter these reports into a database within the software tool. After entering each report, managers are prompted to identify and code the causal factors, and the relationships between them, present within each report using the UPLOADS accident analysis method (described below). The software tool then produces diagrammatic representations of the system of factors involved in individual, and aggregate, incidents (presented in the form of Accimaps). Tables with descriptions of the contributing factors and relationships identified support the diagrams. Video and paper-based training material has also been developed to explain the underpinning theory and support each stage of this process. The system evaluated in this paper was a refined version developed after an initial six-month trial and evaluation of the prototype system [15].

The UPLOADS accident analysis method consists of a framework for representing the system of factors involved in incidents and a taxonomy for populating the framework. In a series of previous studies [2, 3], the RRMF was adapted to describe the “led outdoor activity system” as a hierarchy across multiple levels: equipment, environment and meteorological conditions; decisions and actions of leaders, participants and other actors at the scene of the incident; supervisory and management decisions and actions; activity centre management, planning and budgeting; local area government, schools and parents, regulatory bodies and associations; and government department decisions and actions. The taxonomy consists of two levels of categories. The first level describes the actors (e.g. activity participants, activity leaders, field managers, schools, parents), artefacts (e.g. equipment) and activity context. The second level describes specific contributing factors relating to each of these components. The taxonomy was developed and refined in a series of previous studies [3, 16], and initial testing has shown reasonable levels of inter-rater reliability [17]. The taxonomy is intended to guide both the collection and analysis of incident data, allow for analysis across multiple incidents, and help ensure the reliability of the method [2, 6].

As noted, concerns around the reliability and validity of SAA methods are a key factor in their lack of uptake by practitioners. Validity testing is a critical but often overlooked part of human factors methods development and implementation [18]. One aspect of validity testing involves evaluating whether end users are able to generate analyses that are accurate compared to a criterion, such as an expert panel’s analysis [19]. Such testing ensures that, when implemented in practice, the method can be used as intended.

The aim of this study is to evaluate the validity of UPLOADS by comparing analyses generated by risk managers to those generated by researchers experienced in SAA. This study specifically focuses on the types of contributing factors that are identified from incident reports by risk managers as opposed to researchers.

2 Method

2.1 Design

The study was a prospective trial. It involved participants using UPLOADS to collect and analyze incident data within their organization over a three-month period (June to August 2014). The study was approved by the University of the Sunshine Coast Human Ethics Committee.

2.2 Recruitment

Organizations were invited to participate in the trial via outdoor education and recreation peak body and professional membership association newsletters. Interested organizations were asked to invite a senior staff member in a safety-related role to participate in the study. That person was responsible for entering all incident reports; analyzing and managing the data; and providing training to other staff on reporting incidents. Forty-three organizations volunteered to be involved in the study.

2.3 Materials

UPLOADS includes: paper-based incident report forms; a Software Tool for collecting, coding and analyzing data; an incident severity scale; and video and paper-based training.

Paper-Based Incident Report Forms. The fields in Table 1 were presented in the format of a paper-based incident report form, so that activity leaders (i.e. reporters) could easily report incidents. The paper-based version included an incident severity scale and a taxonomy of potential contributing factors.

Table 1. Information captured by the incident database within the software tool

Software Tool. The software was developed in FileMaker Pro 12 and Java. The software consisted of: five linked databases for collecting data (incidents; staff; clients; and participation); a tool for classifying contributing factors and relationships between them as identified in incident reports; a tool for summarizing the contributing factor and relationships data collected; and a tool for exporting deidentified data (e.g. names removed) to send to the research team.

Data Collection Tools. The Incident Database captured the information shown in Table 1. It was structured to record both near misses and incidents associated with adverse outcomes. A “near miss” was defined as a serious error or mishap that had the potential to cause an adverse event but failed to do so because of chance or because it was intercepted. An “adverse outcome” was defined as any injury or illness. Participants were instructed to record any near misses rated as 2 or above on the severity scale (i.e. an incident with the potential to cause moderate injuries or illnesses) and any adverse outcome rated as 1 or above (i.e. required localized care (non-evacuation) with short term effects).

Procedure for Analyzing Incident Reports. After incident reports had been entered into the software, the program prompted participants to identify and classify the contributing factors, and the relationships between them, from the incident report. When identifying contributing factors, participants were instructed not to speculate beyond the information provided in the incident report. Classifying the contributing factors involved manually entering a description of each contributing factor identified from the report into a list. For each description, participants were then prompted to select a Level 1 code from a drop-down list that best described that factor, and if possible, a corresponding Level 2 factor. Classifying the relationships also involved entering a description of each relationship identified in the report into a list. For each description, participants were then prompted to choose a pair of codes (identified when classifying the contributing factors) that best described that relationship. There was no limit on the number of contributing factors or relationships that could be entered.
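The coding structure described above can be illustrated with a simple data model. This is a hypothetical Python sketch for exposition only: the class names, field names and taxonomy codes are invented, and the real UPLOADS taxonomy is far larger than the fragment shown (the actual software was built in FileMaker Pro and Java).

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Invented Level 1 -> Level 2 taxonomy fragment, illustrative only.
TAXONOMY = {
    "Activity leader": ["Supervision", "Communication"],
    "Activity environment": ["Terrain", "Weather"],
}

@dataclass
class ContributingFactor:
    description: str               # free-text description entered by the manager
    level1: str                    # actor/artefact/context category
    level2: Optional[str] = None   # more specific factor, if one applies

@dataclass
class Relationship:
    description: str
    source: ContributingFactor     # a pair of previously coded factors
    target: ContributingFactor

@dataclass
class CodedReport:
    factors: List[ContributingFactor] = field(default_factory=list)
    relationships: List[Relationship] = field(default_factory=list)

# Coding a single (invented) incident report:
terrain = ContributingFactor("Loose rock on descent path",
                             "Activity environment", "Terrain")
supervision = ContributingFactor("Leader at rear of group",
                                 "Activity leader", "Supervision")
report = CodedReport(
    factors=[terrain, supervision],
    relationships=[Relationship("Leader could not see the hazard ahead",
                                supervision, terrain)],
)
```

Note that, as in the procedure above, a relationship is simply a described link between two factors that have already been coded; there is no cap on how many factors or relationships a report may hold.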

Once this information had been entered, summary analyses of individual and aggregate incidents could be produced. This involved performing a search to identify a set of incidents based on any field or combination of fields within the database (e.g. all incidents, or all incidents associated with injuries and kayaking). The software then generated a diagrammatic representation of the factors and relationships identified in these reports and summary tables listing each code, the frequency of that code and the descriptions associated with it.
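The frequency counts in the summary tables might be produced along the following lines. This is a minimal sketch with invented codes, intended only to illustrate the counting logic; it is not the actual implementation, which was written in FileMaker Pro and Java.

```python
from collections import Counter

# Illustrative coded reports: each report is a list of
# (Level 1, Level 2) codes. The codes are invented examples.
coded_reports = [
    [("Activity environment", "Terrain"), ("Activity leader", "Supervision")],
    [("Activity environment", "Terrain")],
    [("Activity participant", "Experience")],
]

# Count the number of incidents each code appears in. Each code is
# counted at most once per report, mirroring per-incident frequencies.
code_counts = Counter(
    code for report in coded_reports for code in set(report)
)

for (level1, level2), n in code_counts.most_common():
    print(f"{level1} / {level2}: {n} incident(s)")
```

In the software itself, the same counts would be produced over whichever subset of incidents a search returned (e.g. all incidents associated with injuries and kayaking), alongside the Accimap-style diagram and the free-text descriptions attached to each code.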

Training Material. The training material consisted of: a manual explaining the accident causation model underpinning the system and how to collect data about incidents; a manual describing how to use the UPLOADS software; videos demonstrating how to use each component of the software; and a PowerPoint presentation for staff explaining the details required for the incident report forms.

2.4 Procedure

On contacting the research team, organizations and participants were asked to provide written consent to participate in the study. Participants were then sent a link to a demographics questionnaire presented on Survey Monkey. Once completed, participants were sent an email with instructions describing: how to download all the study materials from DropBox; the type of incidents to collect; and dates for submitting data to the research team. In addition, the email invited participants to contact the research team via phone or email if they had any questions or required any help.

Data Analysis. All data collected was merged into a central database. The set of incident reports analyzed by participants was then identified and extracted. These reports contained a list of contributing factors identified by participants, and each of the factors was associated with a Level 1 and Level 2 code from the taxonomy.

To compare the factors identified by participants and researchers, two researchers identified the contributing factors from each report (see Table 1). One researcher was highly experienced in SAA, and one was highly experienced in using UPLOADS and its taxonomy. Once the contributing factors had been identified, the researchers then discussed any discrepancies between the analyses and reached a consensus. Each researcher then classified the factors identified, and again discussed the coding until consensus was reached on all codes selected.

The researchers then classified the contributing factors identified by participants. Again, the coding was discussed until consensus was reached on all codes selected. An aggregate Accimap was then constructed using the UPLOADS accident analysis framework [17], comparing the factors identified by participants with the researchers’ agreed correct answers.

3 Results

3.1 Sample

In total, 23 participants used UPLOADS and sent the data to the research team. Thirteen had analyzed and coded reports. This represents a 53 % response rate for using the incident reporting component of the software tool, and 30 % response rate for using the SAA tools.

Of the participants who analyzed and coded reports, 7 were male and 6 female. Two participants were aged 25 to 34 years; eight were 35 to 44 years; two were 45 to 54 years; and one was 55 to 64 years. All held a management role within their organization, and 9 led outdoor activities as part of their role.

3.2 Overview of Data Collected

Participants analyzed and coded 104 incident reports out of a total of 226 reports. Participants who analyzed reports coded, on average, 92 % of the reports that they themselves had collected (range 36 % to 100 %).

One hundred and twenty-six reports were associated with adverse outcomes, and 14 with near misses. The median rated actual severity of the reports was 1 (range 0 to 5), indicating that the majority of incidents resulted in only minor injuries or illnesses. The median rated potential severity of the reports was 2 (range 1 to 6), indicating that the majority of incidents had the potential for moderate injuries or illnesses.

3.3 Comparison of Contributing Factors Identified from Incident Reports

The median number of factors identified by participants per report was 2 (range 1 to 4). The median number of factors identified by researchers per report was 3 (range 1 to 11). Researchers identified all of the factors that were identified by participants. However, participants identified only half (53 %) of the contributing factors identified by researchers.

Figure 1 shows a comparison between the contributing factors identified from the reports by researchers and participants, classified according to the UPLOADS accident analysis framework. In total, researchers identified 51 types of contributing factors; participants identified 40. The contributing factors that participants had most difficulty identifying were related to the Activity Group and Higher-level management. Factors relating to Higher-level management were present in very few of the reports provided by participants.

Fig. 1. Contributing factors identified from the incident reports by researchers compared to participants, classified according to the UPLOADS Accident Analysis Framework. Numbers in brackets represent: the number of incidents where the factor was identified by researchers; the number of incidents where the factor was identified by participants; and percent agreement. Factors with greater than 75 % agreement are shaded grey.

The level of agreement between researchers and participants varied considerably according to the type of contributing factor. Agreement was highest for factors relating to “other people in the activity group” (Mean level of agreement = 83 %), “activity participant” (Mean level of agreement = 77 %) and “activity environment” (Mean level of agreement = 65 %). Agreement was lowest for factors relating to “higher-level management” (Mean level of agreement = 21 %), “parents/carers” (Mean level of agreement = 25 %) and the “activity group” (Mean level of agreement = 30 %).
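One plausible reading of the per-code percent agreement reported above is the share of researcher-identified incidents in which the participant identified the same code. The paper does not state the exact formula, so the following sketch is an assumption, with invented incident identifiers used purely for illustration.

```python
def percent_agreement(researcher_incidents, participant_incidents):
    """Share (%) of incidents where the researchers identified a code
    in which the participant identified the same code.

    Both arguments are sets of incident identifiers. Assumes agreement
    is measured relative to the researchers' consensus analysis.
    """
    if not researcher_incidents:
        return 0.0
    overlap = researcher_incidents & participant_incidents
    return 100 * len(overlap) / len(researcher_incidents)

# Invented example: researchers found a code in incidents 1-4,
# the participant found it in incidents 1, 2 and 4.
print(percent_agreement({1, 2, 3, 4}, {1, 2, 4}))  # 75.0
```

Under this definition, agreement can never exceed 100 %, which is consistent with the finding above that researchers identified every factor the participants did, while the reverse did not hold.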

4 Discussion

The aim of this study was to evaluate the concurrent validity of UPLOADS by comparing analyses generated by practitioners to those generated by researchers experienced in SAA. The findings showed that participants identified only half of the factors identified by researchers from incident reports. Participants tended to focus on only one or two factors as the primary causes of each incident, indicating a lack of systems thinking and general awareness regarding accident causation. Overall, the findings indicate that UPLOADS provides only a partial bridge over the research-practice gap, as the analyses generated by practitioners may provide only a partial picture of the factors contributing to accident causation in led outdoor activities.

The question, then, is whether this level of validity is sufficient to justify the use of UPLOADS within organizations. A “valid” tool achieves the purpose for which it was designed [19]. The purpose of UPLOADS is to support organizations in developing a deeper understanding of accident causation during led outdoor activities and thereby identify more appropriate and effective countermeasures.

The study provides some evidence that UPLOADS, as it is currently used, supports a deeper understanding of accident causation. Prior to the development of UPLOADS, a survey of Australian outdoor activity providers found that only half had incident databases [20]. Thus, at the very least, organizations that use UPLOADS will benefit by being able to track incident rates and identify trends over time, even if the full causal picture may not be forthcoming. Second, other incident reporting systems for this domain either do not support the identification of contributing factors [20], or include taxonomies that are limited to factors relating to activity leaders, participants, equipment and the environment [3]. The findings show that participants identified higher-level factors in at least some reports. Therefore, while they may not have a complete picture of accident causation in their organization, they potentially have an enhanced understanding compared to not using an incident database at all, or using any of the other incident reporting systems that have been developed for the domain. Moreover, it is likely that continued use of UPLOADS, and exposure to its SAA method and taxonomy, will enable practitioners to develop a deeper understanding of the role of systemic factors in accident causation, which in turn should lead to improvements in the data.

One concern is that the current outputs may not be sufficient to support the development of more appropriate and effective countermeasures. From a systems perspective, appropriate and effective countermeasures need to target the higher-level factors that contribute to hazardous conditions and unsafe behavior [4]. The findings indicate that, based on their own analyses, participants are likely to focus on only one or two factors, such as hazardous conditions and unsafe behaviors by participants and activity leaders. Thus, risk managers are unlikely to be able to identify the systemic network of factors that contribute to such factors. This suggests that the way managers currently use UPLOADS is insufficient to support the identification of more appropriate and effective countermeasures. One approach that has proven useful in other areas is to use expert panels to analyze the data provided by practitioners and then identify countermeasures. However, in this case the volume of data makes this approach impractical. Rather, it may be that practitioners need more training in SAA to help them identify multiple factors from reports. This could potentially involve the development of interactive training videos which explain the approach, as at present the theoretical underpinnings are only explained in written documentation.

However, it should also be noted that in the current study, the period of exposure to UPLOADS and SAA concepts was very short (3 months). It has taken practitioners in domains such as aviation many years to become proficient systems thinkers. This suggests that different results may be obtained once UPLOADS has been widely adopted across the sector for a number of years.

Finally, the limitations of the study and directions for future research should be considered. First, a key aspect of SAA is the identification of relationships between factors; this aspect of UPLOADS was not considered in this study. Evaluating this aspect may shed more light on the countermeasures that practitioners using the tool are likely to produce. Second, participants with few incidents to report had less opportunity to interact with UPLOADS than those who had more incidents. While this was a consequence of the naturalistic study design, it would be useful to evaluate the validity of UPLOADS under conditions where all participants analyze the same incident reports. Potentially, this would also allow insights into how technical aspects of the tool could be improved. Third and finally, training was self-directed (i.e. participants could choose which of the training materials they viewed). This may partially explain why so few organizations used the coding and analysis tools. While face-to-face training would likely have produced better results, the training approach was guided by the need to keep costs as low as possible to support Australia-wide implementation.

In conclusion, it appears that implementation of the UPLOADS system has slightly narrowed the gap between accident analysis research and practice in the outdoor activity domain. Practitioners now have the opportunity to report and analyse accidents in line with systems thinking, applying state-of-the-art SAA methods. However, the findings show that practitioners’ understanding of accident causation in this domain remains limited. More work is required that targets both the usability of the tool and practitioners’ understanding of SAA. This work needs to be undertaken in tandem to ensure that the use of the tool ultimately results in the identification of more appropriate and effective countermeasures to enhance accident prevention in this domain.