Introduction

Effective conservation planning at the regional scale poses well-documented challenges to policy makers, resource managers, and scientists alike (Johnson et al. 1999). Planners who confront management obligations that target complex and layered ecological phenomena must navigate multiple statutory authorities and regulations, grapple with trade-offs among conservation objectives, and integrate diverse stakeholder involvement (Greig et al. 2013). Where species listed under the federal Endangered Species Act enter the equation, the complications that attend management often are multiplied (McFadden et al. 2011). In such circumstances, uncertainties regarding the needs of target species can overwhelm the management agenda, and adaptive management may be selected by default as the primary means of bringing knowledge to conservation planning. But while adaptive management can be an effective means to “learn while doing,” it complements, but does not replace, a structured approach to the selection of management actions that uses the best available scientific information.

Impediments to identifying management actions and analyzing the prospects for their success most frequently include insufficient data and understanding regarding the factors that affect the survival and reproduction of a targeted species and the absence of analytical tools tailored to those species’ distinct life histories. These shortcomings frequently are compounded by a lack of critical analysis of available scientific information, incomplete presentation of information, or misinterpretation or misrepresentation of information (Murphy and Weiland 2011). Attempts to meet these challenges often fail to produce effective management actions (Gunderson and Light 2006; Keith et al. 2010; Susskind et al. 2012). But perhaps more concerning than past failures is the fact that resource managers and scientists, daunted by the task of making scientifically defensible decisions, have adopted the habit of forgoing empirical research, monitoring results, and modeling in their planning efforts and falling back instead on informed intuition, which may be positively cast as “best professional judgment” or negatively cast as “speculation” or “surmise” (Ruckelshaus et al. 2002).

A contributing factor to over-reliance on intuition has been the invocation of a reductionist conception of adaptive management in lieu of a step-wise, structured process for acquiring scientific information and integrating it into resource management decisions. At this point, more than 30 years after the term adaptive management began to appear in the scientific literature (for example, Holling 1978), many federal and state resource agencies nominally have integrated adaptive management with their other core functions. Federal and state resource managers, who tacitly accept the notion that an initial management action will not produce the exact desired conservation outcome, presume that adapting or adjusting the same action might well provide the palliative. Not explicitly recognized in that attractive notion, however, is that a management action that is misinformed or misdirected is unlikely to fit into an adaptive framework. Incremental adjustments to an ineffective management action will inevitably yield a management program that does not meet performance goals—a circumstance that can come with high societal costs and dubious ecological benefits. For example, if the limiting factor on the population growth of a salmon species is, say, the amount of available spawning habitat, then investment in and repeated adjustments to a predation-control management action could well yield no discernible benefits for the species.

To avoid this undesirable outcome, it is essential to implement adaptive management as a step-wise, structured approach to incorporating scientific information into decision making (consider Walters 1997; National Research Council 2009; Gregory et al. 2012). While the operative term in adaptive management is “management,” for the term “adaptive” to apply, the best available scientific information must serve as the basis for management decisions. It has long been recognized that a structured approach is essential to adaptive management (Holling 1978; Walters and Holling 1990); in practice, however, there has been a propensity to dispense with rigorous application of analytical procedures in conservation planning, particularly during the process of developing and identifying candidate management actions and selecting from among them an alternative for programmatic implementation. The tendency to default to judgment in adaptive management, a process touted for its reliance on well-informed, science-based decision making, is routinely overlooked or unacknowledged.

Adaptive management, typically represented in a simplified circular figure composed of six steps, give or take a step or two, has introduced structure into conservation planning. For example, the U.S. Department of the Interior’s technical guidance on adaptive management includes a six-step adaptive-management figure (Fig. 1). Such figures serve a valuable function by allowing their authors to convey an abstract and complex concept in a manner that is readily understood by policy makers, resource managers, and the public at large. Together with catch phrases and terms such as “learning by doing” and “plan, act, evaluate,” the adaptive-management cycle has made the concept accessible to a broad audience. Six-step adaptive-management frameworks are the most prevalent in the literature; the cycle portrayed in Fig. 1 seems to be the most commonly presented.

Fig. 1 The adaptive-management cycle as set out in the U.S. Department of the Interior guidance. Source: Williams et al. 2009

A framework with fewer steps is presented by the U.S. Forest Service in a document pertaining to adaptive management (Fig. 2). The skeletal nature of the description is supplemented by a four-item list of necessary inputs into the process, both at its front end, before an initial management action is settled upon, and at the close of the adaptive-management cycle, after the action has been implemented, monitoring data have been gathered, and the action has been evaluated using those data and available analytical tools. Process models such as these are conspicuous for the absence of even rudimentary detail in their graphical presentation; most of them offer little explicit accompanying context, and none shed light on the depth and complexity of the “design” component of adaptive-management efforts.

Fig. 2 The four-step adaptive-management cycle. Source: Stankey et al. 2005

A number of adaptive-management cycles do set forth the concept with somewhat greater detail and specificity. For example, a consortium of conservation organizations referred to as the Conservation Measures Partnership has developed a five-step adaptive-management framework, which sets out three to four actions that must occur at each step of the cycle (Fig. 3). Whereas the Department of the Interior (in Fig. 1) uses just a one- to two-word phrase to convey ideas, the Conservation Measures Partnership conveys more information by embedding longer descriptive phrases and articulating the multiple activities that must be engaged at each step.

Fig. 3 The five-step adaptive-management cycle. Source: Conservation Measures Partnership (2013)

Graphical representations of adaptive management are easy to comprehend and likely have fostered the proliferation of the concept. But these overly simplistic, almost cartoonish, representations of adaptive management are inevitably a far cry from reality. They constitute useful heuristic devices, but they lack both precision and specificity. This led the editors of a recent book on the subject to state that “there is a disquieting sense that adaptive management has become little more than a rhetorical notion, constructed more by assertion than by demonstration” (Stankey and Allan 2009).

Adumbrated representations of the concept cannot convey the rigor necessary to design and implement adaptive management with even a modicum of programmatic success. Among the many salient missing details is any explicit reference to the scientific pursuits necessary to identify candidate management actions, select from among them those to be implemented, and establish a means of assessment by which management can be adapted. Accordingly, the commonly offered shorthand illustrations likely contribute to a tendency by policy makers and resource managers to underestimate the time, expense, and institutional capacity needed to implement adaptive management (Allan and Stankey 2009).

To the extent that federal and state resource agencies (and scholars for that matter) have set out a structured adaptive-management process, generally they have done so after, rather than before, an initial decision has been made to pursue one or more management actions. Almost without exception, adaptive-management plans and programs have given relatively little attention to the structured process that is necessary to identify programmatic management actions and select from among them an action or actions for implementation. This may be the consequence of a focus on the adaptive component of adaptive management, which places emphasis on the tail end of the cycle where learning and adaptation are expected to occur following evaluation of monitoring data. The Department of the Interior notes, in its technical guidance on the subject, that many practitioners have the misconception that “monitoring activities and occasionally changing them” constitutes adaptive management (Williams et al. 2009). A cabined understanding of adaptive management may be reinforced by the near-exclusive focus in the published literature on learning during implementation and adaptation based on that learning (Allen et al. 2011). But inattention to any one of the obligatory, sequential, procedural steps that precede the actual implementation of management, when “learning” ostensibly occurs, greatly increases the likelihood of program failure.

A number of approaches to decision making set in the context of natural resources management advocate a structured, transparent process (EPA 2003; National Research Council 2009; Murphy and Weiland 2011). Use of a structured process may yield more defensible and efficacious decisions by assuring that pertinent scientific information is gathered, critically assessed, and integrated into the process of making decisions using conceptual and operational models. Where such a process is not utilized to arrive at initial management actions, those actions are liable to fail to achieve desired outcomes, and subsequent efforts to adhere to an adaptive-management framework also frequently will fail to meet expectations. As the federal wildlife agencies have stated, “adaptive management should not be used in place of developing good up-front conservation measures or to postpone difficult issues” (FWS and NOAA 2000). For example, if a management action is premised on an assumed relationship between a target species and some substitute species or surrogate measure (see Caro 2010), and the proxy relationship is not actually valid, then both the action and subsequent efforts to monitor its effectiveness will be compromised.

A structured adaptive-management process necessarily begins with a problem-formulation exercise, then proceeds through the selection and implementation of (initial) management actions together with the design of an associated monitoring scheme (Fig. 4). The first nine steps in that process provide groundwork for the evaluation of alternative management actions and the selection of an action from among them. These steps constitute effects analysis in the context of inter-agency consultation under the Endangered Species Act (Murphy and Weiland 2011) and are referred to as risk assessment in other contexts (EPA 2003; NRC 2009). The process can be expected to yield suboptimal results if the steps are not taken in sequence, or if one or more of the steps is not carried out in a technically rigorous manner by analysts with adequate training, time, and resources.

Fig. 4 Requisite steps in the selection of management actions that are to be carried out in an adaptive framework. Process steps that rely on input from scientists are shaded; those that are primarily the prerogative of resource managers are unshaded. In implementation, the process is inevitably less linear than depicted; feedback loops between sequential boxes (step pairs) can occur throughout

The process begins with a needs assessment and an exercise in establishing goals and objectives; this first step is often referred to as the problem-formulation or definition stage of adaptive management. The next two steps involve collection and critical assessment of data, analyses, and findings from individual studies and research efforts, then synthesis of that scientific information across those studies and research efforts. At this stage, it is essential to assess the reliability of the collected scientific information and acknowledge attendant uncertainty. At the fourth step a conceptual model must be set out that describes, schematically, how the ecological system within which the target species resides functions. That model conveys response variables and covariates (which may be physical or biological) as well as the relationships among them.
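The second and third steps, in particular, lend themselves to quantitative treatment. As a minimal sketch of one such synthesis, the following Python fragment pools hypothetical effect estimates from several studies using inverse-variance weighting, a fixed-effect approach that also makes the attendant uncertainty explicit; the study values, and the effect they are taken to describe, are invented for illustration.

```python
import numpy as np

# Hypothetical effect estimates (e.g., change in juvenile survival per unit
# increase in spawning habitat) and their standard errors from three studies.
effects = np.array([0.12, 0.08, 0.15])
std_errors = np.array([0.05, 0.03, 0.07])

# Fixed-effect synthesis: weight each study by the inverse of its variance.
weights = 1.0 / std_errors**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% interval half-width)")
```

A fuller synthesis would also weigh study quality and heterogeneity among studies, but even this simple calculation forces the attendant uncertainty to be stated rather than assumed away.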

In most adaptive-management graphics, the science-informed process of selecting from among candidate management actions (steps five through ten in Fig. 4) is presumed to have been carried out, rather than explicitly implemented as a requisite set of steps that sequentially confront alternative (candidate) management actions with “best science.” The multi-step process of selecting a management action to be implemented in an adaptive framework can involve the testing of basic hypotheses that address cause-and-effect relationships between targeted species and environmental stressors, the use of quantitative methods that assess weights of evidence for multiple competing theories, and the application of sophisticated analytical and modeling techniques from the field of population biology (Burnham and Anderson 2002).
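One way to picture the weight-of-evidence step that Burnham and Anderson (2002) formalize is the calculation of Akaike weights from the information-criterion scores of competing models. The brief sketch below assumes hypothetical AIC values for three candidate explanations of a population decline; the values and labels are illustrative only.

```python
import numpy as np

# Hypothetical AIC scores for three competing models of population decline
# (e.g., spawning-habitat limitation, predation, temperature stress).
aic = np.array([210.4, 214.9, 218.2])

# Akaike weights: relative likelihood of each model, normalized to sum to one.
delta = aic - aic.min()
rel_likelihood = np.exp(-0.5 * delta)
weights = rel_likelihood / rel_likelihood.sum()

for label, w in zip(["habitat limitation", "predation", "temperature stress"], weights):
    print(f"{label}: Akaike weight = {w:.2f}")
```

Weights of this kind do not establish causation, but they make explicit how strongly the available data favor one candidate explanation, and hence one candidate management action, over another.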

As acknowledged in the four- to six-step adaptive-management cycles, the selection of a management action does not terminate the input of scientists into the adaptive-management process (Fig. 5). Implementation of the management action is coupled with initiation of monitoring, the gathering of data on pertinent ecological factors in an experimental or quasi-experimental framework, in order to allow for assessment of the effectiveness and efficacy of the management action. Monitoring data, their analysis, and their interpretation by technical experts may lead decision makers to reconsider the conceptual model that links the species and other targeted resources, baseline environmental conditions, and management opportunities; to recalibrate the operational model that quantifies their relationships; or to continue to implement the management action unchanged.

Fig. 5 Adaptive management as implemented after selection of a management action (using the same shading described in the caption for Fig. 4)

Engaging science in adaptive management

Science is engaged in identifying alternative management actions, selecting one or more actions from among them, and assessing the performance of the implemented management agenda. Within the framework set forth in Figs. 4 and 5, there are five essential points of engagement where science guides adaptive management.

Developing conceptual models

After an agency has engaged stakeholders in the process of conducting a needs assessment and developing goals and objectives, and it has gathered, critically assessed, and synthesized available scientific information, it must specify the conceptual models that it will use to inform the development and analysis of alternative management actions (EPA 2003). Conceptual models are essential in informing all conservation-planning efforts, development of assessment and monitoring programs, and design of research agendas; they also serve as a fundamental step in the process of implementing an effective adaptive-management program. Conceptual models document the human perception of how ecological systems function by describing in graphical or narrative form the structure of the ecosystem and the linkages between species and other biotic and physical elements in the system (DiGennaro et al. 2012). They convey response variables, covariates, and the relationships among them. To assure that a conceptual model contributes to the identification of the environmental factors that need to be targeted by resource managers, it must be structured to incorporate explicitly the environmental factors that are affected by ongoing resource management, and it should describe how management actions manifest as impacts on target species and their habitats. Conceptual models serve as the blueprints for the development of operational models; hence their structure should anticipate quantification, wherein the conceptual model is parameterized to facilitate a modeling process that is used to predict the effects of management-action scenarios on targeted resources.
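As a hypothetical illustration of how a conceptual model can be made explicit enough to anticipate quantification, the sketch below encodes a stressor-response structure for an anadromous fish as a directed graph. The node names, the linkages, and the use of the networkx library are assumptions made for this example, not features of any particular program.

```python
import networkx as nx

# Hypothetical conceptual model: management levers and environmental drivers
# linked to habitat conditions and, ultimately, to the species response.
model = nx.DiGraph()
model.add_edges_from([
    ("reservoir releases", "summer water temperature"),
    ("reservoir releases", "spawning habitat area"),
    ("gravel augmentation", "spawning habitat area"),
    ("summer water temperature", "juvenile survival"),
    ("spawning habitat area", "egg-to-fry survival"),
    ("egg-to-fry survival", "adult abundance"),
    ("juvenile survival", "adult abundance"),
])

# Tracing the paths from a management lever to the response variable lists the
# mechanistic linkages that an operational model would need to parameterize.
for path in nx.all_simple_paths(model, "reservoir releases", "adult abundance"):
    print(" -> ".join(path))
```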

Confronting management prescriptions with available data

Adaptive management is an effective way to fine-tune a management action that has been recognized a priori as a sound means of mitigating harm to targeted resources, which may include species, their habitats, and the ecological processes that affect them. Unless adaptive management incorporates the structured process set forth in Fig. 4 above, it is not a competent or an efficient way to identify appropriate management actions from among alternatives, or to validate an action that lacks empirical support (Doremus et al. 2011). Because ecological systems are incompletely understood, all management actions are accompanied by uncertainties regarding their probable outcomes. Accordingly, proposed management actions that are intended to be implemented in an adaptive framework must be confronted with available data in sequential hypothesis-testing or weight-of-evidence exercises to establish by inference their likely benefit to the resources targeted by the actions. Hypotheses are structured, for example, to differentiate between environmental stressors that appear to be causative agents affecting the status and trends of target species and those that may simply be correlated with demographic changes. Hypotheses are designed to rigorously consider hierarchies of environmental-stressor effects, mechanistic pathways linking management actions and expected environmental outcomes, variable specification, and spatial and temporal aspects of the costs and benefits of alternative actions. A management action that is not falsified through a hypothesis-testing process (that is, an action that is “supported” by available data) can be considered a reasonable candidate for implementation in the adaptive framework portrayed in Fig. 5.
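A minimal sketch of such a confrontation, using simulated rather than real monitoring data, appears below: two hypothetical stressors are correlated with one another, only one of them drives recruitment, and fitting both jointly with ordinary least squares indicates which candidate action the available data actually support. The variable names and effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60  # simulated site-years of monitoring data

# Habitat loss drives recruitment; predator abundance is correlated with
# habitat loss but has no direct effect in this toy system.
habitat_loss = rng.normal(size=n)
predators = 0.7 * habitat_loss + rng.normal(scale=0.5, size=n)
recruitment = 2.0 - 1.5 * habitat_loss + rng.normal(scale=0.8, size=n)

# Fit recruitment against both candidate stressors at once.
X = np.column_stack([np.ones(n), habitat_loss, predators])
coefs, *_ = np.linalg.lstsq(X, recruitment, rcond=None)
print(f"habitat-loss coefficient: {coefs[1]:.2f}")  # expected near -1.5
print(f"predator coefficient:     {coefs[2]:.2f}")  # expected near 0
```

In this contrived case, the data support habitat-directed action and offer little support for predator control; real confrontations would, of course, require far more careful model specification and diagnostics.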

Building quantitative models

Conceptual models serve as the template for the development of quantitative models that, in turn, draw directly on empirical research to support or refute the relationships posited in the underlying conceptual model(s). The quantification process allows population viability analysis, or some demographic-modeling equivalent, to be used to assess the effects of alternative operational regimes and mitigation-activity scenarios on target species. The construction of quantitative models requires formulation of unambiguous physical and ecological relationships that describe mathematically the interactions between model components. The quantitative model should be a computational manifestation of the conceptual model; however, some evolution in modeling directives and model form will likely occur as new data and a better understanding of the system become available, as data limitations are realized, or as species objectives are refined or revised during the implementation of management actions or environmental restoration efforts. It is essential to employ a group of expert scientists to carry out this modeling activity, and it may be advisable for an independent second expert group to verify and validate the model(s) in turn, confirming that they produce results consistent with the current understanding of the affected ecosystems and species. That process step is necessary to assure that quantitative models exhibit behavior consistent with that intended by those who constructed them and, using sensitivity analyses, to identify the variables (or parameters) that have a potentially significant impact on model outputs.
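A greatly simplified sketch of the kind of demographic calculation involved appears below: it compares quasi-extinction risk under a baseline and under a hypothetical action assumed to raise mean juvenile survival. The two-stage structure, the vital rates, and the management effect are invented for illustration and do not represent any operational model.

```python
import numpy as np

rng = np.random.default_rng(7)

def quasi_extinction_prob(juv_survival_mean, n_runs=2000, years=50, threshold=50):
    """Fraction of stochastic runs of a toy two-stage model ending below threshold."""
    extinct = 0
    for _ in range(n_runs):
        juveniles, adults = 200.0, 200.0
        for _ in range(years):
            # Environmental stochasticity around invented mean vital rates.
            s_j = np.clip(rng.normal(juv_survival_mean, 0.05), 0.0, 1.0)
            s_a = np.clip(rng.normal(0.80, 0.05), 0.0, 1.0)
            # Recruitment fixed at 0.8 recruits per adult in this toy model.
            juveniles, adults = adults * 0.8, juveniles * s_j + adults * s_a
        if adults < threshold:
            extinct += 1
    return extinct / n_runs

# Compare a declining baseline against a hypothetical action (e.g., habitat
# restoration) assumed to raise mean juvenile survival from 0.20 to 0.30.
print("baseline:   ", quasi_extinction_prob(juv_survival_mean=0.20))
print("with action:", quasi_extinction_prob(juv_survival_mean=0.30))
```

Even a toy calculation of this kind makes clear that the value of a candidate action depends on which vital rate it actually influences, which is precisely the question the preceding confrontation with data is meant to answer.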

Designing monitoring schemes

Monitoring is reasonably described as environmental surveillance; it is a form of applied research (or research with a clearly articulated application) that is approached much as a laboratory experiment is approached, with a rigorous design and application of the scientific method (Block et al. 2001; Williams et al. 2002). A monitoring scheme must have explicit programmatic goals and objectives, direct the gathering of data in a framework adequate to detect meaningful changes in the conditions of ecological resources, and develop reliable, scientifically defensible indicators for measuring change (see, for example, Lyons et al. 2008; Nichols and Williams 2006; McDonald-Madden et al. 2010, 2011; Wintle et al. 2010). Development of a monitoring scheme must include identification and characterization of the full complement of environmental attributes, including the water-quality, physical-landscape, and biotic factors that are believed to affect the status and population trends of target species, the extent and quality of their habitats, and the pertinent ecological processes that directly and indirectly affect both. Direct measures and environmental-condition indicators that are effective at detecting effects on target species and their essential resources (in other words, validated prior to use) must be identified. In addition, it is necessary to establish detection limits for the variables to be measured and the condition indicators that are employed, and to identify contingent decision values (thresholds or trigger points) for those indicators (Noon 2003).
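Whether a candidate design is adequate to detect meaningful change can be examined before any field effort begins. The simulation sketch below estimates, under invented variance and trend assumptions, the probability that a given number of annual surveys would detect a 3 percent annual decline with a log-linear regression; it is illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def detection_power(years, annual_decline=0.03, sigma=0.25, n_sims=2000, alpha=0.05):
    """Probability that a log-linear regression detects a simulated decline."""
    t = np.arange(years)
    detected = 0
    for _ in range(n_sims):
        expected = 1000.0 * (1.0 - annual_decline) ** t
        # Multiplicative observation error on the log scale (sigma ~ count CV).
        counts = expected * rng.lognormal(mean=0.0, sigma=sigma, size=years)
        slope, _, _, p_value, _ = stats.linregress(t, np.log(counts))
        if p_value < alpha and slope < 0:
            detected += 1
    return detected / n_sims

for yrs in (5, 10, 15, 20):
    print(f"{yrs:2d} years of annual surveys: power ~ {detection_power(yrs):.2f}")
```

Under these assumptions, short time series have little chance of detecting the decline, which underscores the need to match monitoring duration and precision to programmatic objectives.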

While effectiveness monitoring might seem to be the foundational characteristic of an adaptive-management program, Walters (2007) observed that, among more than 100 case-study attempts to implement adaptive management, most failed to meet the criterion of an experimental management program, whereas others suffered from serious shortcomings in the design and implementation of their monitoring programs. More recently, Westgate et al. (2013) reviewed 61 publications describing programmatic adaptive-management efforts, but just 13 were supported by published monitoring data accrued through those projects. It surely needs to be recognized that logistical and resource constraints will frequently limit opportunities for rigorous monitoring in quasi-experimental designs.

Interpreting returns from monitoring

Adaptive management proceeds not just by adjusting management actions and protocols to make them more effective and efficient over time, but also by drawing lessons from incoming data and contemporary analysis to adjust its monitoring design and implementation as necessary. Real-time adjustments to data collection must consider (or reconsider) whether a monitoring scheme’s limits in time and boundaries in space are appropriately captured in the allocation of sample locations and in the temporal sampling frame (including the time intervals between samples). To confirm their value to the monitoring effort, indicators need to be re-evaluated periodically to address the precision with which they can reject null management hypotheses or discriminate among competing hypotheses and, where applicable, to confirm that surrogates are sufficiently congruent with the targets they are being used to track. Monitoring schemes must be queried on an ongoing basis to establish whether spatial heterogeneity has been appropriately stratified within sampling designs, whether units of measure for indicators are effective, and whether trade-offs between marginal gains in precision and statistical power are addressed.
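Such periodic re-evaluation can be as simple as checking whether a surrogate still tracks its target. The sketch below bootstraps a confidence interval for the correlation between a surrogate index and target-species counts; the paired survey values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

# Invented paired annual values: a surrogate index and target-species counts.
surrogate = np.array([42, 55, 38, 61, 49, 57, 44, 52, 60, 47], dtype=float)
target = np.array([310, 405, 290, 430, 350, 400, 320, 365, 445, 330], dtype=float)

# Bootstrap the correlation to gauge how firmly the surrogate tracks the target.
n = len(surrogate)
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)
    boot.append(np.corrcoef(surrogate[idx], target[idx])[0, 1])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"observed r = {np.corrcoef(surrogate, target)[0, 1]:.2f}, "
      f"95% bootstrap interval ({lo:.2f}, {hi:.2f})")
```

If the interval were to widen or drift toward zero as returns accumulate, that would be a signal to re-examine the surrogate rather than to keep adjusting the management action it is meant to evaluate.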

Conclusion

Adaptive management has a mixed track record at best (Allen and Gunderson 2011). This led one group of practitioners to opine that adaptive management is “relatively little practiced and much misunderstood” (Allen et al. 2011). Two conclusions can be drawn from the record: either the concept itself is flawed and should be abandoned, or the concept is sound but there is typically a failure to implement it properly. In our view, the latter is true. The remedy, that is, the requisite approach to adaptive management, demands developing generally agreed-upon conceptual models, confronting candidate management actions with the best available data to establish reliable conservation options, and then choosing management actions for implementation from among well-informed scenarios using contemporary modeling techniques. It requires consideration of effect sizes when assessing the response of an indicator to exposure to environmental agents of change and management treatments. All of that is required before management commences and before the challenges of adapting that management, using guidance gleaned from a well-designed monitoring scheme, are addressed. Adaptive management requires a demanding upfront approach, one that emphasizes the production, critical assessment, and appropriate interpretation of scientific information throughout the adaptive-management process.

Across the nation, policy makers and resource managers must come to grips with their mixed track record of implementing adaptive management. A fully articulated framework for integrating science into resource management and policy is an imperative, made immediate by the increasing frequency with which resource management agencies are defaulting to adaptive techniques to support management actions that are implemented in the face of uncertainty. Adaptive management approached with new rigor begins with the recognition that shorthand conceptions of intellectually demanding ideas are unlikely to materially advance durable and efficacious approaches to large-scale, complex natural resource management challenges. Rather, structured adaptive management offers a most promising framework for integrating guidance from science, in the overlapping forms of research, monitoring, and modeling, into resource management and policy.