1 Introduction

Over 27 GW of electricity-generating capacity was added to the U.S. grid in 2016, resulting in a net capacity increase of approximately 15 GW [1]. Continuing trends observed over the past decade, the additions were dominated by renewable and natural gas-powered generators while coal units were retired. Solar installations had their strongest year on record: 7.7 GW of utility-scale solar and 3.4 GW of end-use capacity were added in 2016. That year also saw natural gas displace coal as the fuel responsible for the largest share of electricity production [2]. To incentivize renewable growth, utility-scale wind and solar project developers were often awarded long-term power purchase agreements (PPAs), enabling returns on capital cost investments [3]. Distributed renewable generation was incentivized by so-called net metering programs, which enabled end-use customers, acting as producers, to sell their excess electricity back to the grid at prices that did not vary by time slice, providing individual sellers with stable returns [4]. While such programs are now under review and subject to curtailment, they have coincided with renewable energy mandates, rapidly falling technology prices, and low-cost natural gas supplies to produce substantial growth in renewable electricity generation over the past decade [5]. Such developments are among many that have influenced coal-fired units. Quantifying the magnitude and patterns of the changes at coal generators, and discussing how to model them effectively, are the subjects of this generation analysis.

The impacts of increasing levels of renewable energy penetration on grid operations will be magnified in coming decades [6]. Due to the intermittent nature of wind and solar power, large amounts of capacity can be added to the grid, but thermal generators must remain on standby to provide electricity during peak periods and during unfavorable weather conditions [7].

Projected trends in energy generation pose challenges to coal-fired power stations, the majority of which were designed to operate as baseload generators [8]. From sector-wide studies to plant-specific inspections, a variety of impact analyses have been conducted, yet considerable ambiguity remains. This is largely due to two realities: individual plants cannot accurately be grouped into general classifications, and the response of plant operators to load fluctuations is neither entirely predictable nor easily standardized. Recent studies have demonstrated that accounting for the costs associated with cycling-related damage, or with damage mitigation retrofits, affects the cost-effective mix of generation technologies [9].

This study uses hourly generation data from the U.S. Environmental Protection Agency’s (EPA) Air Markets Program [10] to determine how operations have changed at 48 large coal-fired units from 2008 through 2016, whether these changes have affected reliability, how renewable generation has influenced coal plant output variability, and how these developments translate into costs. Before the data analysis is presented, the damages associated with increased operational variability are discussed to provide economic context for the findings. Conclusions and potential next steps in modeling are then presented to assist grid operators, plant owners, and policymakers in making economically optimal decisions as renewable energy continues to grow and operate alongside a large fleet of coal-fired generation units. Even though several corporations have completed internal reviews of their coal plant operations, an overview of the evolving run patterns at large coal power stations has not yet appeared in the academic literature. This paper provides insight into the diverse operational paths that plants have followed since the drop in natural gas prices ten years ago, pointing out that some trends, such as lower capacity factors, have been near-universal while others, such as increased outages, have been experienced by a smaller subset of the sample.

2 Background and literature review

Studies that predict future impacts on coal plant operations due to intermittent generation growth must make a series of simplifying assumptions in order to devise models with reasonable structural complexity and solution times. Since there are approximately 500 coal-fired electricity generating units comprising around 300 GW of total capacity [11], individual plant run decisions cannot be represented in a model that also portrays technology build decisions and transmission infrastructure investment options. In August 2016, the National Renewable Energy Laboratory released the Eastern Renewable Generation Integration Study, which followed earlier reports on Western U.S. renewable expansion. Unlike the West, the eastern regions contain the bulk of the U.S. coal capacity that will be directly impacted by renewable energy growth [12]. The study encompasses seven U.S. RTO/ISO regions as well as Eastern and Central Canadian operators. It introduces predetermined levels of renewable energy generation with corresponding predetermined retirements and quantifies the impacts on existing thermal generation units. Transmission infrastructure upgrades, which can smooth renewable resource volatility by dispersing aggregate supply, are varied across scenarios. Unsurprisingly, as the wind and solar grid presence grows, the average number of starts for thermal units increases while the average capacity factor decreases [12]. Since natural gas combined cycle units are better equipped for frequent ramping, the report estimates that an increased renewable presence will affect the capacity factors of combined cycle units more aggressively than those of coal units, but it still predicts a 30% decrease in the average coal unit capacity factor under a 30% increase in intermittent renewable generation. And as coal shifts from providing baseload generation to acting as a load-following technology, the report notes that further analysis is necessary to measure the financial impacts of these operational changes.

Other studies have shown that intermittent, load-following operations at coal-fired power plants lead to increases in the equivalent forced outage rate (EFOR), the percentage of hours lost to unplanned outages and deratings relative to the hours of availability of the same unit [13]. Traditionally, most damage to coal plants was classified as creep fatigue; sustained operations would gradually wear systems into disrepair [14]. Creep-resistant steel was adopted to mitigate this problem; however, such steel is vulnerable to the temperature gradients produced by variable operations [14]. Reports have noted that steel alloys containing 9% chromium, the so-called T91/P91 steels, while effective for baseload operations, have not been adequately tested in coal plants that are load following and may be subject to unforeseen damage [15]. Moreover, any departure from baseload operations can manifest itself in diverse ways. For example, the plant may run under part-loading conditions, where output is stable but remains at a fraction of full capacity. Two-shifting occurs when the plant becomes a load-following generator, stepping in to meet intraday demand by ramping up and down quickly, increasing its own output volatility while helping stabilize grid operations [15]. Finally, the plant may run on a seasonal schedule, increasing the number of annual cold, warm, and hot starts. During a cold start, the plant begins operating with equipment at ambient temperature and reaches normal operating temperatures over the course of 12 to 15 hours, compared to less than five hours for a hot start [15]. Even though a cold start involves a larger temperature gradient, the more rapid temperature swings associated with warm starts can be the most harmful to the plant’s equipment [16]. Plants may rotate among these operational schemes, increasing the aggregate damage on multiple fronts.

Extensive research has been conducted on the damage associated with increased cycling and on the remediation measures that exist [15, 17]. Both the Electric Power Research Institute and Intertek APTECH, two organizations with expertise in coal-fired power station equipment, have similar estimates for when cumulative cycling damage will cause a forced outage: seven months to two years in older systems and seven years in new systems [18]. Such estimates support the expectation that outages caused by increased volatility, which began affecting the U.S. coal fleet at an elevated rate starting in 2009, could appear within this study’s window, which concludes at the end of 2016.

Two-shifting plant operations cause a wide range of equipment damage, most of it attributable to rapid thermal gradients. A plant undergoing a cold start has equipment that expands as it heats up from ambient temperature to boiler operating conditions. The most common manifestation of this stress is boiler tube failure [15]. Estimates show that a plant experiencing 50 or more starts per year can expect a four-fold increase in tube failures [19]. However, boiler tubes are not the only equipment subject to thermal stress: nearly all equipment will need more frequent replacement due to the expansion and contraction associated with variable temperatures. Welded connections, including boiler structural attachment joints, are also vulnerable in coal plants that have not been designed for two-shifting. Aggravating the creep and thermal fatigue is mechanical fatigue from vibrations produced during turbine run-up, which especially affects turbine blades. Corrosion fatigue from high oxygen levels during startup and from cooling water leakage during shutdown periods also necessitates equipment replacement. This damage, seen in the evaporator sections of the boiler, can be exacerbated when the plant’s water treatment system is shut down as boiler temperatures drop to near ambient levels, causing water chemistry issues. Once-through boilers are especially susceptible to contaminant-induced corrosion failure [15].

Heat containment practices, such as installing improved insulation, help ensure that temperatures fluctuate at controlled, even rates in addition to shortening start-up times. Better insulation also prevents condensation during start-ups as warm steam is exposed to cooler temperatures. Off-load circulating systems that balance temperature variations have been shown to be effective, as has increased drainage capacity that promotes uniform flow [15]. Well-documented equipment lifespan monitoring and information exchanges with other plant operators can build better knowledge of operational impacts. Models can be developed based on equipment response to temperature and pressure conditions, ensuring that preventative maintenance outages substitute for unplanned shutdowns during periods of high demand. Since start-up systems are used more frequently under load-following conditions, their reliability, flexibility, and speed increase in importance. Plant operators have a variety of equipment and operational improvements at their disposal to mitigate material failure rates under variable load conditions; however, uncertainties remain concerning the effectiveness of these measures, especially since plant equipment and operations vary so greatly.

Due to the plant-specific impacts of variable operating procedures, a wide range of estimates for the costs of cold, warm, and hot starts is appropriate. The 2012 Intertek APTECH report provided 25th to 75th percentile estimates for these three start types (inflated to $2016): 67–132 $/MW of capacity for each cold start, 59–83 $/MW for each warm start, and 42–73 $/MW for each hot start. The numbers incorporate additional operating and maintenance costs, capital costs of equipment replacement, increases in the equivalent forced outage rate, fuel and water costs of start-up systems, and plant efficiency declines. They do not include the value of generation that is forgone during an outage nor the costs of obtaining substitute reserve capacity, if applicable. Intertek APTECH also estimates the costs of damage from “significant” two-shifting, defining these load-following operational changes as lying within a MW output range no greater than 32% of gross dependable capacity. The rate of these ramps in generation is based on a historical observation period in which coal generators were not subject to rapid shifting. The cost estimate is given as 3.56 $/MW of capacity ($2016), with the 25th to 75th percentile range spanning 2.04 to 4.10 $/MW. Notably, considerable cost uncertainty surrounds faster ramp rates, with a damage multiplier estimate ranging from 1.5 to 10 for larger subcritical coal plants (Kumar 2012). For a 500 MW plant, this means that a single rapid ramp-up would likely cost anywhere from $1,530 to $20,500 in associated equipment damage, excluding facilities with costs outside of the 25th to 75th percentile range.
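As a rough illustration of how these per-MW figures scale with plant size, the short Python sketch below reproduces the arithmetic for the 500 MW example above. The cost ranges and damage multipliers are the ones quoted in the text; the script itself is purely illustrative.

```python
# Illustrative only: scales the Intertek APTECH 25th-75th percentile cycling
# cost estimates ($2016, per MW of capacity) quoted above to a 500 MW plant.
CAPACITY_MW = 500  # example plant size used in the text

start_costs_per_mw = {      # $/MW of capacity per start (25th, 75th percentile)
    "cold": (67, 132),
    "warm": (59, 83),
    "hot": (42, 73),
}
two_shift_cost_per_mw = (2.04, 4.10)  # $/MW per "significant" two-shifting ramp
rapid_ramp_multiplier = (1.5, 10)     # damage multiplier for faster ramp rates

for kind, (low, high) in start_costs_per_mw.items():
    print(f"{kind} start: ${low * CAPACITY_MW:,.0f} to ${high * CAPACITY_MW:,.0f}")

# A single rapid ramp: the percentile range scaled by the damage multiplier range
low = two_shift_cost_per_mw[0] * rapid_ramp_multiplier[0] * CAPACITY_MW
high = two_shift_cost_per_mw[1] * rapid_ramp_multiplier[1] * CAPACITY_MW
print(f"rapid ramp-up: ${low:,.0f} to ${high:,.0f}")  # $1,530 to $20,500
```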

3 Methodology

Hourly operational data from 48 large coal-fired units were drawn from the 50 largest carbon dioxide emitters in 2008, based on emissions reported to EPA, producing an initial pool of large baseload coal generators. Two plants were excluded for data quality issues; all remaining units stayed in operation through 2016. The 2008–2016 hourly generation data originated from EPA’s Air Markets Program and consisted of 78,168 hourly records of each plant’s MW output from 00:00 on February 1, 2008 through 23:00 on December 31, 2016 (hourly data prior to February 1, 2008 were not available; the calculations take into account that the 2008 period of record was 8,040 hours instead of the 8,784 hours comprising a complete leap year). Annual trends in operational output were analyzed using the statistical software package R in order to determine the type and magnitude of change over the past nine years. When a trend or observation is labeled statistically significant in this analysis, it carries a p-value of 0.10 or less, meaning there is at most a 10-percent probability that a trend of the observed magnitude would arise from random data variability alone rather than from an underlying relationship.
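The trend tests are ordinary least-squares fits of an annual summary statistic against year, judged significant at p ≤ 0.10. The analysis was performed in R; the Python sketch below shows an equivalent test on invented capacity-factor values for a single hypothetical unit.

```python
# Minimal sketch of the significance test described above, assuming one annual
# summary value (e.g., a capacity factor) per unit-year. Values are invented.
import numpy as np
from scipy import stats

def annual_trend(years, values, alpha=0.10):
    """Fit an OLS line and report (slope, significant at p <= alpha)."""
    fit = stats.linregress(years, values)
    return fit.slope, fit.pvalue <= alpha

years = np.arange(2008, 2017)
capacity_factor = np.array([0.82, 0.79, 0.76, 0.74, 0.69, 0.66, 0.63, 0.58, 0.55])
slope, significant = annual_trend(years, capacity_factor)
print(f"slope = {slope:.3f} per year, significant at 10%: {significant}")
```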

4 Results and discussion

Figure 1 shows how capacity factors for the units have changed over the study period, with 38 of 48 units showing statistically significant annual declines ranging from 3% to 8%. No units showed significant positive trends. The impacts become evident when 2016 average MW generation output is compared to 2008 output in Fig. 2. In this direct annual comparison, hours when the sample unit is not generating electricity, or is generating at levels below 10% of its maximum output for that year (which occurs as the plant initializes a start-up, finishes a complete shutdown, or performs post start-up testing), are excluded. This omission allows a direct comparison of operations while the plant is supplying electricity to the grid, rather than being influenced by the number and length of outages, both of which are analyzed later. All units in the sample ran at a lower hourly output rate in 2016 relative to 2008, with decreases ranging from 48 to 394 MW.

Fig. 1

Annual trends in the percentage of time that a unit ran at a capacity factor of 75% or greater

Fig. 2

Annual average hourly plant output when plant is generating electricity at greater than a 10% capacity factor
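A minimal sketch of the Fig. 2 comparison is given below, assuming an hourly pandas DataFrame `df` with columns `year` and `mw` for a single unit; the column names are placeholders rather than the EPA field names.

```python
# Average hourly output while the unit is online, i.e., above 10% of that
# year's maximum output, mirroring the exclusion described above.
import pandas as pd

def mean_online_output(df: pd.DataFrame, year: int) -> float:
    annual = df[df["year"] == year]
    threshold = 0.10 * annual["mw"].max()
    return annual.loc[annual["mw"] > threshold, "mw"].mean()

# Example use:
# change = mean_online_output(df, 2016) - mean_online_output(df, 2008)
```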

Analyses were then conducted to determine how the lower capacity factors were reflected in operational decisions. Figure 3 demonstrates, with a few exceptions, that the lower average annual output coincides with the increasing variability characteristic of two-shifting operations rather than with a consistently stable, lower output from running under part-loading conditions. The figure plots annual trends in average hourly output variability during periods of electricity production, defined as the sum of the absolute values of each hourly MW output change over the course of a calendar year divided by the number of electricity-generating hours in that year.

Fig. 3

Annual trends in the average variability while the unit is generating electricity

Average hourly output variability:

$$\frac{\sum\nolimits_{k = 1}^{n - 1} \left| \theta_{k} - \theta_{k+1} \right|}{n}$$
(1)

where \(\theta_{k}\) is the MW output at hour \(k\) and \(\{1, 2, \ldots, k, \ldots, n\}\) is the set of all hours with non-zero generation.
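A direct implementation of Eq. (1) is sketched below, assuming `output` is one unit’s hourly MW series for a calendar year; a similar calculation carried out over all hours of the year underlies the total annual variability discussed later (Fig. 5).

```python
# Eq. (1): sum of absolute hour-to-hour MW changes, divided by the number of
# electricity-generating hours. Filtering out zero hours before differencing
# is one possible interpretation of "periods of electricity production".
import numpy as np

def hourly_output_variability(output) -> float:
    theta = np.asarray(output, dtype=float)
    theta = theta[theta > 0]          # keep only hours with non-zero generation
    n = theta.size
    if n < 2:
        return 0.0
    return float(np.sum(np.abs(np.diff(theta))) / n)
```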

Statistically significant trends are found in 38 out of 48 units, with 37 of the 38 statistically significant trends showing an increase in average volatility.

Another measure of generation output variability, the standard deviation, is calculated for 2008 and 2016 operational data when the plant is running above a 10% capacity factor. Periods when the plant is not producing electricity or is generating at very low levels (well below typical online output) are excluded from the standard deviation calculations for each year, as outages would inflate the degree of sample dispersion. For example, a plant that runs consistently at a high capacity factor when online but is subject to frequent shutdowns would have a large standard deviation (both clusters would be far from the sample mean). By including only hourly observations when the plant is supplying electricity to the grid, the standard deviation reflects whether the unit is ramping frequently and whether those cycles are large in magnitude. The comparison provided in Fig. 4 shows that 47 of 48 units had a larger annual standard deviation in 2016 output relative to 2008.

Fig. 4

2016 change in annual standard deviation relative to 2008 when the plant is generating electricity
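A companion sketch for Fig. 4 follows, using the same hypothetical `df` layout as the earlier example; only hours above 10% of the year’s maximum output enter the calculation, so offline hours do not inflate the dispersion.

```python
# Standard deviation of hourly output while the unit is online.
import pandas as pd

def online_output_std(df: pd.DataFrame, year: int) -> float:
    annual = df[df["year"] == year]
    online = annual[annual["mw"] > 0.10 * annual["mw"].max()]
    return online["mw"].std()

# Example use:
# spread_change = online_output_std(df, 2016) - online_output_std(df, 2008)
```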

Even though the above analysis has shown increasing variability when the plant is running, if the lower capacity factors are partly attributable to periods when the plant is not running, some fatigue damage could be mitigated and limited to certain parts of the year. Figure 5 verifies that while a portion of the total annual variability is diluted by more frequent, longer outages, their impact does not outweigh the increased ramping during operational periods: 28 of the 48 sample units had statistically significant increases in variability when all hours in the year are considered.

Fig. 5

Annual trends in total annual variability in 2008–2016

Examining the duration, number, and seasonality of plant outages can provide insight into equipment maintenance and replacement cycles. Even though the units are located in geographically diverse locations across the continental U.S., their power grids, on average, experience higher electricity demand during the winter and summer months than during periods of temperate spring and autumn weather; it is therefore unlikely that a plant would be scheduled for routine maintenance during a high-demand season. Trends in the total number of hours that a unit is not generating any electricity are shown in Fig. 6. All of the statistically significant trends are positive, indicating that hours spent offline are growing at the affected units. Statistically significant trends in the number of annual outages, irrespective of outage length, are more mixed: 13 of 48 units had statistically significant trends, but only eight of these were positive, meaning five units had a declining number of outages over the 2008–2016 study period. During the winter and summer high-demand periods, five units in the sample showed a significant trend in the number of outages, and all five trends were increases. Since no units showed significant declines in high-demand-period outages, it appears likely that certain plants are now experiencing an increase in their EFOR, but these increases are far from universal. While the period analyzed is long enough that changes to baseload operations early in the period would necessitate maintenance by the end of the period, it is also possible that some maintenance is being deferred and will either be undertaken in coming years or simply hasten the unit’s planned retirement. Plant operators may also be aware of the impact that operational changes have on equipment lifespans and may more aggressively schedule maintenance during planned outages.

Fig. 6

Annual trends in the number of hours each year that a unit was offline in 2008–2016
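The outage statistics can be sketched from the same hourly data. The paper does not spell out its exact event definition, so the grouping below, which treats any maximal run of consecutive zero-output hours as one outage, is an assumption.

```python
# Annual offline hours and a simple outage count for one unit, assuming the
# same hypothetical hourly DataFrame `df` with 'year' and 'mw' columns.
import numpy as np
import pandas as pd

def outage_summary(df: pd.DataFrame, year: int):
    mw = df.loc[df["year"] == year, "mw"].to_numpy()
    offline = mw == 0
    hours_offline = int(offline.sum())
    # An outage "starts" wherever an offline hour follows an online hour
    # (or begins the series); count those transitions as discrete events.
    previously_offline = np.concatenate(([False], offline[:-1]))
    n_outages = int((offline & ~previously_offline).sum())
    return hours_offline, n_outages
```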

Finally, the impact of intermittent renewable energy capacity additions on coal plant operations is examined. Renewable generation data were obtained by NEMS region [9] for 2008 through 2016, and each coal unit analyzed was assigned to its respective region. End-use and utility-scale wind and solar generation were considered. Baseload renewable generation, from hydroelectric, geothermal, and biomass plants, was excluded, although the growth of these technologies over the period has been very low. Based on the 48 coal-fired units analyzed, a causal relationship between intermittent regional renewable energy growth and the change in coal unit output variability cannot be determined (shown in Fig. 7). Within each region, there is a wide range of operational change that appears highly unit-specific rather than driven by a uniform grid response. As wind and solar installations continue to increase, a direct correlation with coal-unit variability may become apparent, but none currently exists. The same analysis was conducted for the 2009–2016 period, which excludes the period before the decline in natural gas prices but includes most of the intermittent renewable generation growth. In this timeframe, no universal, cross-cutting correlation between coal unit generation volatility and renewable generation was evident. While this study looks at generation rather than plant retirements, the U.S. Department of Energy’s report on reliability and electricity markets showed no regional correlation between coal and nuclear plant retirements and the 2016 share of renewable energy generation [20].

Fig. 7

2016 average change in output per hour of generation relative to 2008, plotted against renewable generation growth in the host region
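A sketch of the cross-sectional check behind Fig. 7 follows. The two input arrays, one regional renewable-growth value per coal unit and that unit’s 2008-to-2016 change in output variability, are assumptions and do not reproduce the paper’s data.

```python
# Test whether coal-unit variability changes correlate with regional
# intermittent renewable growth across the sample.
from scipy import stats

def variability_vs_renewables(renewable_growth, variability_change, alpha=0.10):
    """Pearson correlation; returns (r, significant at p <= alpha)."""
    r, p = stats.pearsonr(renewable_growth, variability_change)
    return r, p <= alpha

# A p-value above 0.10 across the 48 units would be consistent with the
# absence of a clear relationship reported above.
```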

5 Conclusions and policy implications

Most of the coal-fired units considered in this analysis experienced a shift from baseload operations to more frequent ramping, becoming load-following electricity suppliers. The change coincides with evolving economics in the energy sector: plants that had provided baseload power are no longer the least expensive to run. Mid- and long-term energy outlooks anticipate these factors continuing to accelerate over the coming decades, requiring coal-fired units that remain on the grid to become more nimble in order to supply electricity during high-demand periods and when intermittent generators experience weather-related outages. Uncertainties remain in the degree of future volatility: strong renewable energy growth coinciding with higher natural gas prices would cause a larger magnitude of coal-fired operational output variability, while decreases in energy storage costs would result in lower variability and decreased coal generation. Although the running patterns of specific coal-fired units have changed dramatically, the overall impacts cannot be reduced to universal trends across all units. This suggests that generation output is partially a function of plant operators understanding how output patterns will affect equipment performance and lifespan. Since coal plant retirements are expected to accelerate in coming years, increasing volatility at certain units should not necessarily be equated with plant tolerance levels. If a unit is close to retirement, long-term equipment damage, while present, may simply be tolerated until decommissioning.

The current sample only encompasses large units; operational changes affecting smaller coal-fired units over the same nine-year period may be vastly different. Expanding the range of unit nameplate capacities would capture a greater share of coal generation and provide more clarity on sector-wide impacts. Yet, even with the homogeneous sample in this study, the findings suggest that top-down modeling approaches which generalize equipment response and damage patterns may not accurately characterize how plants are being affected, or how they will be affected in the future, because they do not capture plant-specific nuances. Cataloguing unit-specific costs associated with differing ramp rates and combining them with regional renewable and gas generation forecasts would allow operators and policymakers to determine the costs and benefits of different output patterns by developing optimization models with generation constraints. A supplier operating in a service area with large projected increases in intermittent generation could then understand how running a plant at stable output levels, generating excess electricity during low-demand periods and forgoing sales during high-demand periods, compares to load-following operations that maximize sales revenue and minimize fuel costs but incur high degrees of equipment damage. Such modeling would also show how upfront plant upgrades, such as those that reduce thermal stress, can be cost-effective if the time horizon for anticipated usage is sufficient. This analysis has demonstrated the need for future work to accurately depict how thermal plants will operate in the coming years under greater impacts from intermittent renewable-generating sources.