Respiratory failure

Respiratory monitoring

Measurement of lung volume has always been a concern in patients receiving mechanical ventilation (MV), and complex methods have been proposed for clinical investigation. Patroniti et al. [1] described a simplified helium dilution technique to measure end-expiratory lung volume and compared it with computed tomography (CT) in 21 MV patients. The authors specifically studied the accuracy and precision of the method. A simple rebreathing circuit was used to deliver at least ten usual tidal volumes. The agreement between the two methods was found to be acceptable for clinical purposes. It was noted, however, that the greater the amount of hyperinflated tissue, the greater the underestimation of lung volume by the helium dilution method. It has been well demonstrated that endotracheal suctioning is a frequent cause of repeated lung volume loss and can induce derecruitment in patients with acute respiratory distress syndrome (ARDS). Fernandez et al. [2] tested the effects of this maneuver in ten patients with only mild to moderate lung failure. Three techniques were compared, with or without preoxygenation. The authors found that the reduction in lung volume during suctioning was similar with the quasiclosed and closed systems but significantly greater with the open system. They also observed that in these patients without severe lung disease the changes were transient and reversed within 10 min.
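The mass balance underlying closed-circuit helium dilution can be stated in a few lines. The following minimal sketch is ours, not the authors' protocol, and uses hypothetical circuit volumes and helium fractions; the actual technique of Patroniti et al. involves additional practical corrections.

```python
# Illustrative mass balance for closed-circuit helium dilution.
# Helium is essentially insoluble, so its quantity is conserved:
#   C_initial * V_circuit = C_final * (V_circuit + EELV)

def eelv_helium_dilution(v_circuit_ml: float,
                         c_he_initial: float,
                         c_he_final: float) -> float:
    """End-expiratory lung volume (ml) estimated by helium dilution.

    v_circuit_ml: gas volume of the rebreathing circuit (ml)
    c_he_initial, c_he_final: helium fractions before and after equilibration
    """
    return v_circuit_ml * (c_he_initial - c_he_final) / c_he_final

# Hypothetical example: a 2-l circuit primed with 10% helium that
# equilibrates at 6% implies an EELV of about 1,333 ml.
print(round(eelv_helium_dilution(2000, 0.10, 0.06)))  # 1333
```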

Alveolar consolidation is best diagnosed by CT. Lichtenstein et al. [3], continuing their work on the usefulness of lung ultrasound examination in the ICU, assessed its value in 65 patients in whom CT had confirmed alveolar consolidation. Only six were not diagnosed by ultrasound; conversely, ultrasound was positive in only one of 52 control patients without alveolar consolidation on CT. At least in the authors' hands, this technique seems to constitute a reliable tool for this diagnosis.

Measurement of respiratory mechanics usually describes the respiratory system in terms of elastance, compliance, and time constant. In an elegant study, Kondili et al. [4] divided tidal expiration into different phases based on the analysis of expiratory flow-volume curves in ten patients with acute exacerbation of chronic obstructive pulmonary disease (COPD). They showed that the end of expiration is characterized by a lengthening of time constants, and that the addition of external positive end-expiratory pressure decreases resistance at the end of expiration and shortens time constants, thus facilitating equilibration between the external pressure and the alveolar pressure. Although the effects of external positive end-expiratory pressure in such patients were already known, this new method of exploration sheds new light on its mechanisms. Part of the expiratory resistance can be caused by the endotracheal tube itself. In many cases its contribution is small; however, a growing number of studies suggest that over the course of MV the inner diameter of the tube may progressively decrease owing to deposition of secretions. Using the acoustic reflectometry method, Boqué et al. [5] prospectively assessed the reduction in inner volume of 94 endotracheal tubes used in 80 patients and found this reduction to be extremely frequent. In almost one-fourth of the patients the real diameter of the tube was smaller than 7 mm. The clinical implications of such findings may be important, and further studies are needed on this topic.

Intra-abdominal hypertension may have important clinical consequences for both respiratory function and intra-abdominal organ function. Its prevalence, however, is not known. It is thus the great merit of the multicenter collaborative 1-day prevalence study by Malbrain et al. [6] in 13 ICUs of six countries to have evaluated its frequency in a cohort of 97 patients. Based on a definition of abnormal intra-abdominal pressure as 12 mmHg or higher, its prevalence was 50%, while 8% of the patients had abdominal compartment syndrome with a pressure of 20 mmHg or higher. The only significant risk factor was body mass index, while the effects of massive fluid resuscitation and of renal and coagulation impairment were at the limit of statistical significance.

Last, intrahospital transport poses an important risk to ICU patients. Continuous monitoring as well as the presence of qualified staff and well-maintained equipment are probably essential to minimize incidents. The Australian Incident Monitoring Study in Intensive Care received 176 reports describing 191 incidents over a 6-year period [7]. The investigators sought to identify all contributing factors, of which 46% were system-based and the remainder human-based. In 31% of the incidents there were significant adverse outcomes. A number of factors were also identified as having prevented or limited harm. These problems are often underestimated or underreported and deserve close attention. An editorial by Shirley and Bion [8] accompanied this paper.

Acute respiratory distress syndrome

Epidemiological characteristics and outcomes of acute lung injury (ALI) vary across studies. This variability depends on definitions, the subpopulations included, comorbidities, and the severity of the disease per se. Brun-Buisson and coworkers [9] studied the current occurrence and causes of ALI and ARDS, the relationship between and respective outcomes of mild ALI (PaO2/FIO2 between 200 and 300 mmHg) and ARDS (PaO2/FIO2 equal to or below 200 mmHg), and the factors associated with survival. A 2-month inception cohort (February–March 1999) of 463 individuals with ALI among 6,522 patients admitted for at least 4 h to an intensive care unit (ICU) was scrutinized. Data pertain to 78 ICUs in ten European countries. Among 136 patients initially presenting with mild ALI, 74 (55%) progressed to ARDS; overall, 62 patients had mild ALI throughout while 401 had ARDS. Crude ICU and hospital mortality rates were 22.6% and 32.7% (p<0.001) for mild ALI and 49.4% and 57.9% (p=0.0005) for ARDS. Initial mean tidal volume and positive end-expiratory pressure were 8.3±1.9 ml/kg and 7.7±3.6 cmH2O in ARDS patients. Air leaks were detected in 15.9% of subjects. In multivariate analysis mortality was associated with age, immunoincompetence, Simplified Acute Physiology Score (SAPS) II, logistic organ dysfunction score, and early air leak. The authors concluded that ALI is frequent, that there is a continuum between ALI and ARDS, and that there is a substantial difference in mortality, which is much higher in ARDS. An editorial comment by Rubenfeld and Christie [10] accompanies this article, which also has an erratum [11].
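As a reminder of the American-European Consensus Conference oxygenation thresholds used throughout these studies, a minimal sketch follows. It classifies on the PaO2/FIO2 ratio alone, whereas the full definitions also require acute onset, bilateral infiltrates, and the absence of left atrial hypertension; those clinical criteria are not modeled here.

```python
def aecc_oxygenation_category(pao2_mmhg: float, fio2: float) -> str:
    """Classify by the PaO2/FIO2 ratio per the AECC oxygenation criterion.

    Illustrative only: the complete AECC definitions also require acute
    onset, bilateral infiltrates on chest radiography, and no evidence of
    left atrial hypertension.
    """
    ratio = pao2_mmhg / fio2
    if ratio <= 200:
        return "ARDS"
    if ratio <= 300:
        return "ALI (mild)"
    return "neither"

print(aecc_oxygenation_category(80, 0.5))   # ratio 160 -> ARDS
print(aecc_oxygenation_category(140, 0.5))  # ratio 280 -> ALI (mild)
```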

A PaO2/FIO2 ratio below 200 mmHg is one of the diagnostic criteria for ARDS according to the American-European Consensus Conference, yet this ratio is affected by a number of factors such as ventilator settings and the FIO2 per se. Ferguson and colleagues [12] analyzed the impact on enrollment in a trial of high-frequency oscillatory ventilation, and the potential effects on study outcome, of placing screened ARDS patients on standard ventilator settings. These settings were pressure control ventilation to achieve a tidal volume of 7–8 ml/kg predicted body weight while ensuring peak inspiratory pressures below 35 cmH2O, an FIO2 of 1.0, and positive end-expiratory pressure of 10 cmH2O. Forty-one consecutive patients were included. After institution of the standard settings, the PaO2/FIO2 remained persistently below 200 mmHg in 17 patients (41%), whereas the remainder (24 patients, 59%) had a PaO2/FIO2 above 200 mmHg 30 min after the changes were implemented. The change in FIO2 was the main reason for these changes in PaO2/FIO2. The ICU mortality rate was significantly greater in those with persistent ARDS than in those with the transient form (52.9% vs. 12.5%, p=0.01). The authors concluded that their findings are important for trial design because of the observed differences in outcome, and proposed the use of standardized ventilator settings for patient enrollment.

China has seen enormous economic growth, yet the country's demographic characteristics differ substantially from those in the Western world, and the epidemiological characteristics of ARDS in China are largely unknown. Lu and coworkers [13] performed a 12-month survey (2001–2002) in 15 ICUs of 12 university hospitals in Shanghai. The aim was to investigate the incidence, causes, and outcome of ARDS in adult patients treated in the ICU for at least 24 h. A total of 5,320 admissions were registered during this period, and 108 patients (2%) were diagnosed as having ARDS. The most common predisposing factors were pneumonia (34%) and sepsis of nonpulmonary origin (31%). Twenty-seven patients were not intubated. In those who received invasive MV the most frequently used ventilatory mode was synchronized intermittent mandatory ventilation (32%). In-hospital mortality for ARDS patients was 68.5%. The majority of the ARDS patients who died (60%) did so because of multiple organ dysfunction, whereas 23% died of refractory respiratory failure. The authors concluded that reassessment of respiratory and intensive care management and implementation of effective therapeutic interventions are required.

Severe acute respiratory syndrome

Severe acute respiratory syndrome, an epidemic disease whose causative agent is a coronavirus, carries a mortality rate of about 7–13% in young patients and about 60% in the elderly. Gomersall and coworkers [14] undertook a retrospective, observational cohort study of the first 54 patients admitted to the ICU of a Hong Kong university hospital because of respiratory failure. The aim was to describe the clinical course and outcome of these patients and to investigate factors associated with prognosis. Their median Acute Physiology and Chronic Health Evaluation (APACHE) II score was 11 (interquartile range 8–13). At 28 days 34 patients (63%) were alive and not undergoing MV, 6 (11.3%) were receiving MV, and 14 (25.9%) had died. Seven of 27 ventilated patients (25.9%) developed barotrauma despite a low tidal volume (mean 7.7±2.2 ml/kg predicted body weight) and low-pressure strategy (mean positive end-expiratory pressure 8.3±2.5 cmH2O; peak airway pressures 25.6±4.8 cmH2O in those who did not develop barotrauma and 26.5±3.1 cmH2O in those who did). Variables associated with poor outcome on univariate analysis were age, severity of illness, lymphocyte count, decreased steroid dose, positive fluid balance, chronic disease or immunosuppression, and nosocomial sepsis. The authors concluded that mortality is high in this syndrome, that it causes severe respiratory failure with little other organ failure, and that there is a high incidence of barotrauma in those requiring MV.

Extubation of the trachea

Endotracheal extubation is the final step in weaning from MV. A failed tracheal extubation entails a worse prognosis, and traditional weaning indices are poor predictors of extubation outcome. Data suggest that patients with neurological diagnoses, lack of an adequate cough, and frequent endotracheal suctioning are at increased risk of extubation failure. Salam and coworkers [15] objectively assessed the impact of neurological status, cough strength, and volume of endotracheal secretions on extubation outcome. In 88 patients who had passed a spontaneous breathing trial they measured cough peak flow, endotracheal secretions, and the ability to complete four simple tasks (open eyes, follow with eyes, grasp hand, and stick out tongue). Fourteen patients (15.9%) failed the first extubation. Patients with a cough peak flow of 60 l/min or less were more likely to fail extubation than those with a cough peak flow higher than 60 l/min [risk ratio (RR) 4.8, 95% confidence interval (CI) 1.4–16.2]. Patients with secretions of more than 2.5 ml/h were more likely to fail than those with less secretion (RR 3, 95% CI 1–8.8). Patients unable to complete the tasks were more likely to fail than those who completed all four commands (RR 4.3, 95% CI 1.8–10.4). The presence of any two of these risk factors had a sensitivity of 71% and specificity of 81% in predicting extubation failure. The authors concluded that these simple, inexpensive, and reproducible measurements provide a useful clinical approach to guide the extubation process. An editorial by Epstein [16] comments further on this article.
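The three-factor bedside rule reported by Salam and coworkers lends itself to a simple illustration. The sketch below uses the cutoffs given in the study; the function name and tabulation are ours, and the operating characteristics quoted in the comment come from the study itself, not from this code.

```python
def extubation_risk_factors(cough_peak_flow_l_min: float,
                            secretions_ml_per_h: float,
                            completed_four_tasks: bool) -> int:
    """Count the three risk factors described by Salam et al. [15].

    Factors: cough peak flow <= 60 l/min, secretions > 2.5 ml/h,
    inability to complete the four simple commands.
    """
    factors = 0
    if cough_peak_flow_l_min <= 60:
        factors += 1
    if secretions_ml_per_h > 2.5:
        factors += 1
    if not completed_four_tasks:
        factors += 1
    return factors

# Per the study, the presence of any two factors predicted extubation
# failure with 71% sensitivity and 81% specificity.
n = extubation_risk_factors(55, 3.0, True)
print(n, "risk factors ->", "high risk" if n >= 2 else "lower risk")
```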

Periextubation pain has received little attention in ICU patients. Acute pain produces clinical manifestations related to sympathetic system activation and can lead to deleterious cardiovascular effects. Gacouin and coworkers [17] assessed the intensity of pain at extubation using a visual analogue scale in 203 of a total of 332 patients extubated over a period of 1 year. Pain was significantly associated with a SAPS II above 36 (p=0.03), duration of MV of 6 days or more (p=0.002), and intubation not performed in the operating room (p=0.001). Severe pain was reported by 45% of patients. Pain resolved within 1 h after extubation in the majority of patients. Duration of MV of 6 days or longer was the only independent risk factor for pain of at least moderate intensity [odds ratio (OR) 2.4, 95% CI 1.03–5.4, p=0.04]. The authors concluded that periextubation pain is frequent and should be considered for treatment.

Unplanned, deliberate self-extubation of the trachea may affect patients' outcomes and clinical resources. Moons and coworkers [18] developed a risk assessment tool to categorize patients at risk of deliberate tracheal self-extubation. Patients admitted to seven ICUs of a large tertiary referral center who had been intubated for more than 12 h were followed for 3 months. In this period 26 unplanned extubations occurred. Clinical and demographic characteristics were compared with those of 48 randomly selected control patients. The incidence of unplanned extubation was 4.2% (incidence density 0.68 per 100 ventilation days) and was lower in surgical ICUs (2.6%) than in medical ICUs (9.5%). Fifteen cases (57.7%) required reintubation. Multiple logistic regression indicated that patients with a low sedation level and a higher degree of consciousness were at higher risk of deliberate self-extubation. The authors concluded that appropriate reduction in sedation when patients are weaned, timely extubation, and increased surveillance when high risk is recognized may reduce the number of unplanned extubations.

Noninvasive mechanical ventilation

The recent International Consensus Conference in Intensive Care Medicine on the use of noninvasive ventilation (NIV) in acute respiratory failure [19] clearly stated that "the addition of NPPV to standard medical treatment of patients with acute respiratory failure may prevent ETI, and reduce the rate of complications and mortality in patients with hypercapnic 'pump' failure." Concerning episodes of "lung failure" the same document concluded that "the use of NIV may be also an appropriate treatment in selected patient populations with acute 'lung' failure." Single studies have demonstrated NIV to be an adequate alternative to conventional ventilatory support, or a better therapeutic strategy than standard therapy plus oxygen, in such patients. More studies are required to confirm these findings.

In 2004 four studies were published in Intensive Care Medicine that improve understanding of, and perhaps expand, the indications for NIV and highlight some methodological problems. Continuous positive airway pressure (CPAP) has been considered a very effective treatment for acute respiratory failure (ARF) due to cardiogenic pulmonary edema (CPE). L'Her et al. [20] compared the physiological and clinical effects of CPAP with those of standard medical therapy in a subset of very old patients (>75 years) in whom the application of any form of MV is sometimes denied. Within 1 h CPAP led to a decreased respiratory rate and improved oxygenation compared with baseline, whereas no differences were observed in the medical treatment group. Seventeen patients in the latter group developed severe complications vs. only four in the CPAP group. Early 48-h mortality was significantly lower in the ventilated patients, but overall hospital mortality was not. The authors concluded that CPAP promotes early clinical and physiological improvement in elderly patients during episodes of ARF due to CPE, without affecting overall mortality.

While NIV has been considered the first-line treatment to prevent intubation in COPD patients during an episode of hypercapnic ARF, little is known about its effectiveness as a "real" alternative to invasive MV when the criteria for emergency intubation are met. In a matched case-control study of 128 COPD patients with very severe ARF (pH 7.18, PaCO2 104 mmHg), Squadrone et al. [21] evaluated the efficacy of noninvasive (case) vs. invasive MV (control). Mortality rate, duration of MV, and length of ICU and hospital stay did not differ between the two groups, but the NIV group had fewer complications and a tendency to be weaned earlier from ventilation. The intubation rate in the case group was 62%, but this subset of patients had outcomes similar to those of the control group. Those with successful NIV had a lower mortality rate and shorter ICU and hospital stays than the patients who were intubated. In COPD patients with very advanced hypercapnic ARF, NIV thus has a high rate of failure but nevertheless provides some advantages compared with invasive ventilation. Indeed, a subgroup analysis suggested that the delay in intubation was not deleterious in the subset of patients who failed NIV.

Patients with severe chronic pulmonary diseases often suffer from coexisting pathologies and are also likely to develop extrapulmonary complications. Scala and coworkers [22] assessed the impact of such comorbidities on the short- and long-term outcomes of NIV for hypercapnic ARF in COPD patients. They divided 120 patients (pH 7.28, PaCO2 78 mmHg) into a failure group (n=22) and a success group (n=98) according to whether NIV avoided intubation. The prevalence of chronic and acute comorbidities was 20% and 42%, respectively, most cases being cardiovascular. Both NIV failure and 6-month mortality were greater in patients with than in those without comorbidities. In multiple regression analysis NIV failure was predicted by acute comorbidity and forced expiratory volume in 1 s, while death at 6 months was predicted by having more than a single acute comorbidity of noncardiovascular origin and by worse pre-existing activities of daily living. Comorbidities are thus common in COPD patients requiring NIV, and their presence influences patient outcomes.

Technical aspects of NIV include not only the types of interfaces, ventilators, and ventilatory modes employed but also some "marginal" factors that may interfere with patient well-being. It is now clear, for example, that loud sounds can contribute to patient discomfort during the ICU stay. Cavaliere et al. [23] studied noise intensity in ten healthy volunteers undergoing NIV at two levels of pressure support (10 and 15 cmH2O), delivered through a helmet with and without a heat and moisture exchanger filter, a full face mask, and a nasal mask. Inside the helmet the noise intensity exceeded 100 dB, significantly higher than that during facial and nasal mask ventilation (70 dB). Noise intensity was affected neither by the level of pressure applied nor by the presence of the filter. The level of discomfort was similar with the four different settings. The authors concluded that helmet NIV is associated with significantly greater noise than nasal and facial masks but is equally comfortable in the short-term setting.

Weaning and postextubation failure

Weaning difficulties occur in a relatively small percentage of ventilated patients; however, patients undergoing prolonged MV are more prone to complications and therefore dramatically increase the costs of care. Great attention should be paid to the early identification of patients who are likely to fail weaning and to predicting those who may develop postextubation respiratory failure. Four studies assessed the causes of weaning failure or the effectiveness of physiological indices in predicting weaning or postextubation failure.

Paresis acquired during the ICU stay (ICUAP) is recognized as a major event that often occurs during the management of patients with prolonged critical illness. It has also been shown that the duration of MV is an independent predictor of ICUAP. De Jonghe et al. [24] studied a prospective cohort of 95 patients without preexisting neuromuscular disease who were ventilated for longer than 7 days to determine whether ICUAP is an independent risk factor for prolonged weaning after awakening. ICUAP was defined as a Medical Research Council score lower than 48. Patients who developed ICUAP (24/95, 25%) had a significantly longer weaning time (6 vs. 3 days). In multivariate analysis the two independent predictors of prolonged weaning were ICUAP and the presence of COPD. The authors concluded that ICUAP was the strongest independent predictor of prolonged MV, and that prevention of ICUAP should result in shorter weaning times.

Only a few of the studies that have investigated the utility and accuracy of weaning indices were blinded. Conti and coworkers [25] evaluated the most popular weaning indices in a blinded fashion (i.e., the physicians making decisions about weaning were always unaware of the predictive values). The study had two steps: patients' data were first used to select the cutoff values for the weaning predictors (i.e., minimal false classification); these values were then prospectively validated in a cohort of 52 other patients. The variables recorded during the first 2 min after discontinuation of ventilation were: vital capacity, tidal volume, airway pressure in the first 100 ms of an occluded inspiration (P0.1), minute ventilation, respiratory rate, maximal inspiratory pressure (MIP), rapid shallow breathing index (f/VT), P0.1/MIP, and P0.1 × f/VT. The receiver operating characteristic curves showed that the tests had no ability to discriminate between success and failure. The authors concluded that the most commonly evaluated indices are poor predictors of weaning outcome.
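For reference, the composite indices evaluated by Conti and coworkers are simple ratios and products of bedside measurements. A minimal sketch with hypothetical patient values follows; the cutoffs themselves were derived by the authors and are not reproduced here.

```python
def weaning_indices(resp_rate: float, tidal_volume_l: float,
                    p01_cmh2o: float, mip_cmh2o: float) -> dict:
    """Composite weaning predictors of the kind evaluated by Conti et al. [25].

    f/VT is the rapid shallow breathing index (breaths/min per liter);
    P0.1 is the airway occlusion pressure at 100 ms and MIP the maximal
    inspiratory pressure (both in cmH2O, taken as magnitudes).
    """
    f_vt = resp_rate / tidal_volume_l
    return {
        "f/VT": f_vt,
        "P0.1/MIP": p01_cmh2o / mip_cmh2o,
        "P0.1 x f/VT": p01_cmh2o * f_vt,
    }

# Hypothetical patient: 25 breaths/min, VT 0.35 l, P0.1 3 cmH2O, MIP 40 cmH2O
print(weaning_indices(25, 0.35, 3.0, 40.0))
# {'f/VT': 71.4..., 'P0.1/MIP': 0.075, 'P0.1 x f/VT': 214.3...}
```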

The breathing pattern in normal persons displays a certain variability, which is maintained by a central neural mechanism and the feedback loops of arterial chemoreceptors and lung vagal sensory receptors. Deviations of breathing pattern variability (BPV) from the normal level have been found under pathological conditions. Bien et al. [26] investigated whether changes in BPV predict weaning from MV in postoperative patients recovering from systemic inflammatory response syndrome (SIRS). BPV was assessed during a 30-min period of pressure support ventilation at 5 cmH2O by means of the Poincaré plot, a scattergram that analyzes the breathing pattern on a real-time, breath-to-breath basis. The coefficient of variation and 2 SD, indicators of the dispersion of data points in the plot, were significantly lower in the patients who failed the weaning attempt than in those who did not. A low BPV is thus associated with a high incidence of weaning failure, and this variability may potentially serve as a weaning predictor.
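The Poincaré analysis used by Bien et al. plots each breath against the next and summarizes the dispersion of the resulting scattergram. The sketch below computes only the scalar descriptors mentioned above (SD and coefficient of variation) on synthetic tidal volume series; it is an illustration, not the authors' software.

```python
import statistics

def breathing_pattern_variability(tidal_volumes_ml: list) -> dict:
    """Dispersion descriptors of a breath-to-breath tidal volume series.

    A Poincare plot graphs VT(n) against VT(n+1); here we compute only the
    scalar descriptors cited by Bien et al. [26]: the standard deviation
    and the coefficient of variation of the series.
    """
    mean = statistics.mean(tidal_volumes_ml)
    sd = statistics.stdev(tidal_volumes_ml)
    return {"mean": mean, "SD": sd, "CV%": 100 * sd / mean}

# Synthetic data: low variability (as seen in weaning failure) vs. higher
low_var = [400, 405, 398, 402, 401, 399, 403, 400]
high_var = [400, 450, 360, 420, 380, 440, 370, 430]
print(breathing_pattern_variability(low_var)["CV%"])   # ~0.5%
print(breathing_pattern_variability(high_var)["CV%"])  # ~8%
```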

P0.1 has been proposed as a weaning predictor, although the results have been somewhat controversial. Fernandez and coworkers [27] evaluated the clinical value of this index, recorded by software incorporated directly into commercially available ventilators, in detecting extubation failure. The threshold values of the indices that best discriminated between successful weaning and extubation failure were determined in one-half of the patients, and the accuracy of each index was then assessed in the other half. Breathing pattern and P0.1 were recorded during a 30-min spontaneous breathing trial. Of 114 patients who tolerated the trial and were extubated, 21 required reintubation within 48 h. The area under the receiver operating characteristic curve for P0.1, P0.1 × f/VT, and f/VT for diagnosing extubation failure was rather low and not statistically significant; a P0.1 × f/VT higher than 100 detected extubation failure with low specificity. The authors concluded that bedside P0.1 and derived indices are of little help, if any, in predicting extubation failure.

Sepsis, septic shock, infection

Catheter-related infection and endocarditis

Although catheter-related infection (CRI) rates have decreased owing to improved infection control techniques and devices, CRI remains the major cause of nosocomial bacteremia in ICUs. Preventive strategies and new devices opposing colonization continue to be tested. Four randomized controlled trials (RCTs) on this topic were published in Intensive Care Medicine this year.

Langgartner et al. [28] examined 140 central venous catheters (CVCs) and investigated whether skin disinfection during CVC insertion with an alcohol plus chlorhexidine (0.5%) solution followed by povidone-iodine provides greater protection against CVC colonization and infection than either one of the antiseptics alone. They found that catheter colonization rates were reduced fivefold (from 30% with povidone-iodine alone to 4.7%) with the successive application of both antiseptics. Molecular typing of organisms confirmed that most CVC colonization originated from the skin insertion site. The sample size of this study was, however, too small to confirm a reduction in infection rates.

In another RCT including 260 CVCs Carrasco et al. [29] compared heparin-coated triple-lumen catheters with chlorhexidine-silver sulfadiazine coated ones, a comparison that had not been performed previously. Colonization was found in 29 of 132 heparin-coated catheters vs. 13 of 128 chlorhexidine-silver sulfadiazine coated catheters (p=0.03). The incidence of CVC-related bloodstream infections (BSIs) did not differ (3.24 vs. 2.6 per 1,000 catheter-days; RR 1.22, 95% CI 0.27–5.43, p=0.79). The authors concluded that antiseptic coating of central venous catheters reduces the risk of catheter colonization.
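The per-1,000 catheter-day rates and relative risks quoted in these trials follow directly from event counts and exposure. A minimal sketch follows, using a standard log-normal approximation for the confidence interval, which is not necessarily the method used by the trial statisticians.

```python
import math

def rate_per_1000(events: int, exposure_days: float) -> float:
    """Incidence density per 1,000 catheter-days."""
    return 1000 * events / exposure_days

def relative_risk(e1: int, n1: int, e2: int, n2: int):
    """RR of group 1 vs. group 2 with an approximate 95% CI (log method)."""
    rr = (e1 / n1) / (e2 / n2)
    se = math.sqrt(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)
    return rr, rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)

# Hypothetical exposure: 10 infections over 3,080 catheter-days
print(round(rate_per_1000(10, 3080), 2))                  # 3.25
# Colonization counts quoted above (29/132 vs. 13/128 catheters); the
# resulting RR is our illustration, not a figure reported by the trial.
print(tuple(round(x, 2) for x in relative_risk(29, 132, 13, 128)))
# -> (2.16, 1.18, 3.97)
```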

Leon et al. [30] sought to confirm the results of previous studies by Raad et al. [31] demonstrating the preventive efficacy of antibiotic-impregnated (minocycline plus rifampin) catheters, in an RCT comparing these with nonimpregnated CVCs in 465 patients. Although the colonization rate decreased (from 24 to 10.4 episodes/1,000 catheter-days, RR 0.43, 95% CI 0.26–0.73), neither the overall infection rate (8.6 to 5.7, RR 0.67, 95% CI 0.31–1.44) nor the CVC-related BSI rate (5.9 to 3.1, RR 0.53, 95% CI 0.2–1.44) decreased significantly. The authors noted that antimicrobial-impregnated catheters were associated with a significant decrease in coagulase-negative Staphylococcus colonization (RR 0.24, 95% CI 0.13–0.45) but a significant increase in Candida spp. colonization (RR 5.84, 95% CI 1.31–26.1).

Finally, Brun-Buisson et al. [32] tested a new generation of antiseptic catheters (with enhanced chlorhexidine-silver sulfadiazine coating on both the internal and external surfaces of the CVC) against nonimpregnated CVCs in an RCT enrolling 367 patients. Significant catheter colonization occurred in 23 (13.1%) and 7 (3.7%) patients in the noncoated and coated groups, respectively (11 vs. 3.6 per 1,000 catheter-days, p=0.01), and CVC-related BSIs occurred in 10 (5%) and 4 (3%) patients in the noncoated and coated groups, respectively (5.2 vs. 2 per 1,000 catheter-days, p=0.10).

In all these trials antimicrobial-impregnated catheters were associated with a significant reduction in catheter colonization and a trend toward a reduction in infection episodes, but not in BSIs. It is noteworthy that all these trials had relatively low rates of definite CVC-related BSI in the control groups. In this context it is difficult to demonstrate the efficacy of antimicrobial-impregnated catheters unless the study is adequately powered. The potential for an increased risk of Candida spp. colonization of antibiotic-impregnated catheters needs confirmatory evidence.

Safdar and Maki [33] provided additional evidence on the predominant pathogenesis of short-term CVC-related infection by analyzing the combined results of two trials (1,263 catheters) testing preventive strategies aimed at minimizing catheter colonization at the skin entry site. The pathogenesis of infection was confirmed by DNA typing of organisms. The overall CVC-related BSI rate was 5.9/1,000 catheter-days. In the pooled control groups of the two trials 25 CVC-related BSIs occurred (7.0 per 1,000 catheter-days); 60% of these infections were extraluminally acquired, 12% intraluminally derived, and 28% indeterminate. In contrast, CVC-related BSIs in the treatment groups were most often intraluminally derived (60%, p=0.006). The authors concluded that most CVC-related BSIs are extraluminally acquired and derive from the cutaneous microflora. Strategies achieving successful suppression of cutaneous colonization can therefore substantially reduce the risk of CVC-related BSI associated with short-term CVCs.

It is noteworthy that about 75% of CVCs removed because of suspected infection prove not to be infected. To limit wasteful removal of catheters Rijnders et al. [34] tested in a small RCT whether a "watchful waiting" strategy, in which selected patients with low to moderate suspicion of CVC infection were observed without removal of the CVC, is safe and allows the catheter to be retained. Hemodynamically stable patients without proven bacteremia, insertion site infection, or an intravascular foreign body were randomized to a standard-of-care group (in which all CVCs were changed as planned) or a watchful waiting group (in which CVCs were changed only when bacteremia or hemodynamic instability subsequently occurred). Of 144 patients with suspected CRI, 80 met exclusion criteria (47 of whom were shown to be bacteremic, including 20 (25%) with CVC-related BSI), and 64 (44%) were randomized. All 38 CVCs were changed in the standard group vs. 16 of 42 in the watchful waiting group (a 62% reduction, p<0.01), with no difference in bacteremia rate or patient outcome. The authors concluded that the use of a simple clinical algorithm permits a substantial decrease in the number of unnecessarily removed CVCs without increased morbidity. In an accompanying editorial Brun-Buisson [35] emphasizes the value of this conservative approach in selected patients with low to moderate suspicion of CVC infection and comments on the potential value of additional tests (skin insertion site swab culture, paired blood cultures) to assist clinical decision making.

To identify factors associated with the hospital outcome of adult patients with infective endocarditis (IE) Mourvillier et al. [36] retrospectively reviewed 228 patients admitted over an 8-year period to their referral center ICU. The overall hospital mortality rate was 45%. In patients with native valve IE (n=146) the variables associated with outcome by multivariate analysis included septic shock (OR 4.8, 95% CI 2.0–11.3, p=0.0003), cerebral emboli (OR 3, 95% CI 1.3–7.0, p=0.01), immunocompromised state (OR 2.9, 95% CI 1.1–7.3, p=0.03), and cardiac surgery (OR 0.47, 95% CI 0.2–1.0, p=0.05). In those with prosthetic valve IE (n=82) they included septic shock (OR 4.1, 95% CI 1.8–9.4, p=0.001), neurological complications (OR 3.1, 95% CI 1.4–8.6, p=0.008), and immunocompromised state (OR 3.5, 95% CI 1.1–7.3, p=0.003). The authors concluded that IE is associated with a high mortality rate in patients requiring ICU admission; although early complications often make optimal medical and surgical management decisions difficult, surgical treatment appears to improve patient outcome.

Other nosocomial infections and antimicrobial use

In a questionnaire survey of French intensivists Azoulay et al. [37] addressed the difficult and unresolved question of how to interpret a lower respiratory tract secretion sample positive for Candida spp. As expected, physicians' attitudes varied widely. Although a majority felt that samples positive for Candida reflect only colonization in nonimmunocompromised patients, one-fourth (27%) were inclined to provide antifungal therapy, and a majority (61%) felt that repeated sampling at various sites to calculate the "colonization index" was necessary. The authors concluded that additional studies are needed to improve our understanding of respiratory tract Candida colonization and infection in nonneutropenic MV patients and to determine the indications for preemptive antifungal therapy in this population.

Misset et al. [38] report the results of a 5-year infection control quality improvement program based on published guidelines to reduce nosocomial infection rates in their medical-surgical ICU. Mean device-related infection rates (per 1,000 procedure-days) were: ventilator-associated pneumonia (VAP) 8.7, urinary tract infection (UTI) 17.2, CVC colonization 6.1, and CVC-related BSI 2.0. During the 5-year study period there was a significant decline in UTI and CVC-related infection rates and an increase in time to infection, but no change in VAP rates. The authors concluded that UTI and CVC-related infections can be reduced through a continuous quality-improvement program based on surveillance of nosocomial infections.

To identify factors associated with high antimicrobial use, Meyer et al. [39] reported results from the first 2 years of the SARI program, a surveillance system of antimicrobial use set up in 2000 in a network of 35 German ICUs. The mean antimicrobial use density (AD) was 1,332 DDD/1,000 patient-days and was correlated with length of stay. Penicillins plus β-lactamase inhibitor (AD 338.3) and quinolones (155.5) were the antimicrobials most used. Length of stay was an independent risk factor for an AD above the 75th percentile of total antimicrobial use (OR 1.96 per day) as well as for higher use of carbapenems (OR 1.90 per day) and extended-spectrum penicillins (OR 2.01 per day). High use of glycopeptides and quinolones (AD above the 75th percentile) was correlated with CVC use. The authors suggested that the SARI data could serve as a benchmark for improving the quality of antimicrobial drug administration in ICUs and for international comparison.

Sepsis–HIV infection: clinical studies

Two large epidemiological studies of severe sepsis syndromes were published this year. To provide updated epidemiology of severe sepsis, Brun-Buisson et al. [40] reported the results of a 2-week inception cohort study of severe sepsis and shock conducted in 206 randomly selected French ICUs. Of 3,738 patients admitted, 546 (14.6%) had severe sepsis or shock, 30% of which were ICU-acquired. The median SAPS II and Sequential Organ Failure Assessment (SOFA) scores at onset of severe sepsis were 48 and 9, respectively. Mortality was 35% at 30 days and 41.9% at 60 days after sepsis, and 11.4% of patients remained hospitalized at 2 months. Chronic liver and heart failure, acute renal failure and shock, SAPS II at the onset of severe sepsis, and the 24-h total SOFA score were the independent risk factors most strongly associated with death. The authors concluded that whereas the attack rate of severe sepsis has increased in French ICUs over the past decade, its mortality appears to have decreased, suggesting improved management of patients.

Finfer et al. [41] reported a prospective population-based, inception cohort, incidence study conducted in 23 multidisciplinary ICUs of 21 hospitals in Australia and New Zealand, including 5,878 consecutive ICU admissions. A total of 691 patients—11.8 (95% CI 10.9–12.6) per 100 ICU admissions—had 752 episodes of severe sepsis. The ICU and 28-day mortality rates were 26.5% and 32.4%, respectively, and 37.5% of patients died in hospital. The authors estimated the incidence of severe sepsis in adults treated in Australian and New Zealand ICUs at 0.77 (0.76–0.79) per 1,000 inhabitants and concluded that the population incidence found in this prospective study falls in the lower range of recent estimates from retrospective studies in the United States and the United Kingdom. In an accompanying editorial Moss [42] highlights questions concerning the interpretation of epidemiological surveys and recent changes in the epidemiology of sepsis.

Finkielman et al. [43] report a 10-year, single-center, retrospective study of 63 patients (mean age 28 years) with septic abortion, a relatively rare condition nowadays. Their mean APACHE II score on admission was 13.9. Acute renal failure developed in 73% (46 of 63) of patients, disseminated intravascular coagulation in 31% (15 of 49), and septic shock in 32% (20 of 63). Blood cultures were positive in 24% (15 of 62). Twelve patients (19%) died. The authors concluded that, when requiring ICU admission, this preventable condition remains associated with high morbidity and mortality.

The diagnostic value of C-reactive protein remains controversial. Sierra et al. [44] reexamined this question in a prospective observational study of 125 patients with SIRS (70 with subsequently confirmed infection and 55 without) and normal control subjects. Median C-reactive protein values on day 1 were lower in healthy subjects (0.21 mg/dl, 95% CI 0.21–0.4), patients with acute myocardial infarction (2.2 mg/dl, 95% CI 2.1–4.9), and those with noninfectious SIRS (1.7 mg/dl, 95% CI 2.4–5.5) than in those with sepsis (18.9 mg/dl, 95% CI 17.1–21.8, p<0.001). A C-reactive protein threshold of 8 mg/dl had 94.3% sensitivity and 87.3% specificity for predicting sepsis. The authors concluded that serum C-reactive protein can be used as an early indicator of infection in patients with SIRS.

To determine the clinical impact of highly active antiretroviral therapy (HAART) on ICU admissions and outcome in patients infected with human immunodeficiency virus (HIV), Vincent et al. [45] compared patients admitted during the pre-HAART era (1995–1996; n=189) and the HAART era (1998–2000; n=236). During the latter period 79% of patients admitted to the ICU had benefited little or not at all from the availability of HAART: 44% had no history of antiretroviral therapy, and 35% had failed to respond to antiretrovirals. The ICU admission rate of hospitalized HIV-infected patients increased rather than decreased compared with the pre-HAART era (5.9% vs. 4.4%, p=0.004). After adjustment for significant prognostic covariates ICU survival was unchanged between the two periods (adjusted OR 0.613, 95% CI 0.312–1.206), but 3-month survival had improved (adjusted OR 0.57, 95% CI 0.32–0.99, p=0.045). The authors concluded that the ICU admission rate of HIV-infected patients remains high in the HAART era, possibly because of underutilization of therapy and limited access to health care.

Ventilator-associated pneumonia

As indicated by Yu and Singh [46], "over 300 studies have been published in peer-reviewed journals in the past 8 years dealing with management of ventilator-associated pneumonia (VAP)." However, no consensus exists to date on the best way to identify patients with true lung infection, to select early appropriate antimicrobial therapy, or to avoid unnecessary use of antibiotics.

Diagnosis

Controversies regarding the management of patients suspected of developing VAP have been nourished by numerous studies comparing different bacteriological diagnostic techniques or clinical with bacteriological evaluation, and/or evaluating the Clinical Pulmonary Infection Score (CPIS), recently proposed as the first step of a "clinical strategy" [47].

Elatrous et al. [48] conducted a prospective study of 138 suspected episodes of VAP in 100 patients to compare quantitative cultures of endotracheal aspirates (ETA) with those of the plugged telescoping catheter. Pneumonia was diagnosed on positive cultures (≥10³ cfu/ml) of the latter. The authors reported a good correlation between the two techniques for identifying bacterial species and for differentiating between positive and negative cultures of the plugged telescoping catheter when using a diagnostic threshold for ETA of 10⁴ cfu/ml, which yielded a sensitivity of 92% and a specificity of 85% for ETA. With the usual and widely used ETA threshold of 10⁶ cfu/ml the results were poorer, with weak agreement between the two techniques (κ=0.48). The authors concluded that ETA quantitative cultures are adequate to identify pathogenic organisms in significant concentrations in the lower respiratory tract but not to diagnose VAP, since quantitative cultures of the plugged telescoping catheter are not a "gold standard," or even a "silver standard," for differentiating lung infection from heavy colonization of the lower respiratory tract.
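The agreement statistic quoted here (κ=0.48) is Cohen's kappa computed from paired positive/negative classifications. A minimal sketch from a 2×2 agreement table follows; the counts are hypothetical, not the study's raw data.

```python
def cohens_kappa(both_pos: int, both_neg: int,
                 only_a_pos: int, only_b_pos: int) -> float:
    """Cohen's kappa for two binary classifiers from a 2x2 agreement table.

    Kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    """
    n = both_pos + both_neg + only_a_pos + only_b_pos
    observed = (both_pos + both_neg) / n
    a_pos = (both_pos + only_a_pos) / n   # marginal positive rate, rater A
    b_pos = (both_pos + only_b_pos) / n   # marginal positive rate, rater B
    expected = a_pos * b_pos + (1 - a_pos) * (1 - b_pos)
    return (observed - expected) / (1 - expected)

# Hypothetical table for 138 paired samples: kappa ~0.47, i.e., the
# "moderate" range in which the reported 0.48 also falls.
print(round(cohens_kappa(both_pos=40, both_neg=62,
                         only_a_pos=24, only_b_pos=12), 2))
```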

Mentec et al. [49] reported the results of a multicenter prospective study conducted in five French ICUs that enrolled 66 patients with suspected VAP, including 34 with "confirmed" VAP (based on the classification of the 1992 International Consensus Conference on the clinical investigation of VAP [50]). Four diagnostic techniques were compared: blind tracheal aspirates, blind protected telescoping catheter (blind PTC), bronchoscopic PTC, and bronchoscopic bronchoalveolar lavage (BAL). The authors found that direct examination of secretions obtained with blind PTC, bronchoscopic PTC, and BAL was of similar value for diagnosing VAP and choosing appropriate initial treatment. They underlined, however, that blind and bronchoscopic PTC had diagnostic values comparable to those reported with BAL only when the collected sample contained visible secretions; in the entire population the areas under the receiver operating characteristic curves were significantly smaller with the three other techniques (0.78, p=0.002 for blind tracheal aspirates; 0.83, p=0.009 for blind PTC; 0.85, p=0.01 for bronchoscopic PTC) than with BAL (0.98).

These two studies have the usual limitations of studies evaluating new diagnostic methods: the absence of a clear and definitive definition of the disease, here the absence of a gold standard. Several published studies (experimental, animal, and postmortem studies [51]) have tried to determine such a gold standard and suggested that only the combination of histological examination and quantitative cultures of lung tissue gives arguments strong enough to validate or eliminate the diagnosis of pneumonia in patients treated with MV for more than 3 days.

Dupont et al. [52] designed a study of 171 VAP episodes in 108 patients to assess the impact on the duration of MV and on antibiotic use of the result of one diagnostic technique: the percentage of infected cells in fluid obtained by BAL, i.e., the value of direct examination. In clinical practice the moment of direct examination of pulmonary secretions is a very important one, since it corresponds to the differentiation between infected and noninfected patients, the decision whether to treat, and the choice of initial antimicrobial therapy. The authors confirmed a strong relationship between the percentage of infected cells and the results of quantitative BAL cultures. They also found two factors negatively associated with the percentage of infected cells: the duration of MV before the onset of VAP (12.2±12.1% before the 7th day, 7.4±9.2% between the 7th and 14th days, and 4.8±6.4% after 15 days of MV; p=0.0002) and the ongoing use of antibiotics. They suggested that this diagnostic criterion should be interpreted with caution in patients receiving prior antibiotic therapy who have a clinical suspicion of late-onset VAP.

Schurink et al. [53] examined the accuracy of the CPIS for diagnosing VAP and evaluated interobserver variability in its calculation. They compared the scores of a slightly modified CPIS with the results of quantitative BAL cultures in 99 patients suspected of having VAP. The diagnosis of VAP was based on a positive BAL (quantitative culture growth ≥10⁴ cfu/ml). For 52 patients the CPIS was calculated by two investigators. VAP was diagnosed in 69 patients. When a CPIS higher than 5 was used to diagnose VAP, the sensitivity of the score was 83% but the specificity only 17%; with a cutoff of 7 points sensitivity fell to 41% while specificity rose only to 77%. In addition, the authors identified a major limitation compromising the wide use of the CPIS in clinical practice: the level of agreement between observers in classifying individual CPIS values (≤6 vs. >6) was poor. They concluded that the low specificity and sensitivity of the CPIS, combined with considerable interobserver variability, do not permit a diagnostic strategy to be based on such a score.
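For readers unfamiliar with the score, a hedged sketch of one common formulation of the modified CPIS is given below. Point assignments vary slightly between publications, and band forms and culture results are omitted here, so this is an illustration rather than the exact scoring sheet used by Schurink et al.

```python
def modified_cpis(temp_c: float, wbc_per_ul: float, secretions: str,
                  pao2_fio2: float, ards: bool, infiltrate: str) -> int:
    """One common formulation of the modified CPIS (illustrative only).

    secretions: 'none' | 'nonpurulent' | 'purulent'
    infiltrate: 'none' | 'diffuse' | 'localized'
    Band forms and culture results, which add points in some versions,
    are deliberately omitted.
    """
    score = 0
    if 38.5 <= temp_c <= 38.9:
        score += 1
    elif temp_c >= 39.0 or temp_c <= 36.0:
        score += 2
    if wbc_per_ul < 4000 or wbc_per_ul > 11000:
        score += 1
    score += {"none": 0, "nonpurulent": 1, "purulent": 2}[secretions]
    if pao2_fio2 <= 240 and not ards:
        score += 2
    score += {"none": 0, "diffuse": 1, "localized": 2}[infiltrate]
    return score

# A score > 6 was the threshold discussed above for suspecting VAP.
print(modified_cpis(38.6, 9000, "purulent", 200, False, "localized"))  # 7
```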

Luyt et al. [54] confirmed this observation in a retrospective cohort study of 201 patients included in the "invasive strategy" group of the French multicenter randomized trial comparing two strategies (invasive vs. clinical) in the management of patients suspected of having VAP [55]. The CPIS was calculated retrospectively from the data collected for the initial study on days 1 and 3 and was compared between patients with (n=88) and without (n=113) bacteriologically confirmed VAP. On day 1 the CPIS was similar in the two groups (6.4±1.4 vs. 6.2±1.6 with and without VAP, respectively, NS). On day 3, 138 patients (69%) had a CPIS higher than 6; based on the algorithm described by Singh et al. [47], these patients would have required prolonged antimicrobial therapy. Compared with a strategy based on bronchoscopy, 60 patients without bacteriologically confirmed VAP would have been unnecessarily treated, and 10 patients with VAP would not have received antibiotics after day 3, for a total of 35% of patients incorrectly managed.

For more than 20 years clinical evaluation including temperature, the macroscopic aspect of tracheal secretions, leukocytosis, and chest radiography has repeatedly been identified as at least a nonoptimal way to diagnose pneumonia in patients treated with MV [51]. The CPIS, which is no more than the quantification of a clinical evaluation, does indeed constitute significant progress, particularly when it is included in a management algorithm [47]. As indicated by Yu and Singh [46] in their editorial, the use of such an algorithm based on the CPIS resulted in limiting the number and duration of antibiotic courses, reducing the incidence of infections due to multiresistant bacteria, shortening duration of stay, and lowering the 30-day mortality rate. It is therefore possible to reduce antibiotic use without deleterious consequences in MV-treated patients with clinical signs suggesting the development of VAP. Furthermore, the authors clearly stated that excessive broad-spectrum therapy leads to greater emergence of multiresistant organisms and increases mortality and morbidity. Limiting the use of antibiotics in patients with a CPIS lower than 6 is a very important first step of a "revisited" strategy. Yu and Singh reported important data concerning this strategy: during the 3 years following implementation of the Singh protocol they had not experienced "a single case of a patient with an invasive infection leading to death in a patient with CPIS <6 receiving 3 day monotherapy." Thus the relevant questions are: Is this patient developing VAP? Does the patient require antibiotics? To answer these questions, at least two new randomized trials can be proposed: first, according to the authors, a comparison of short-course monotherapy vs. no antibiotics in patients with a low (<6) CPIS and, second, a comparison of the Singh protocol with a bacteriological strategy already shown to be superior to the "classic" clinical strategy [55].

Treatment

The adequacy of initial antibiotic therapy is recognized as a major prognostic factor in patients with VAP [51]. This statement seems obvious; it is also the rationale for giving broad-spectrum antibiotics to all patients with a clinical suspicion of VAP, ignoring the risk of emergence of resistance and the excess morbidity and mortality associated with uncontrolled antibiotic prescription. The relationship between appropriateness of treatment and outcome therefore needs to be clarified.

Clec'h et al. [56] conducted a study in 142 patients with bacteriologically confirmed VAP to test the hypothesis of a link between adequacy of treatment, severity of illness at VAP onset, and outcome. The rate of adequate antibiotic therapy was 44.4% on day 0 and 92% on day 2. No significant difference in mortality was observed with and without adequate initial treatment in the entire population. However, in the 70 patients (49%) with a low severity level (defined by a logistic organ dysfunction score ≤4) inadequate therapy was clearly associated with higher ICU mortality (37% vs. 7% in patients treated appropriately, p=0.006). Pending confirmatory studies, these results potentially have a major impact on the management of patients suspected of having VAP: the appropriate choice of antibiotic(s) is of particular importance in patients with a low severity level at the onset of clinical signs of infection, and it is reasonable to perform accurate bacteriological investigations in this population to guide the choice of initial treatment.

The development and correct evaluation of new antibiotics is one of the cornerstones of improving therapeutic management in patients with VAP. One current and increasing difficulty is the treatment of infections due to multiresistant bacteria such as Pseudomonas aeruginosa, Acinetobacter spp., Stenotrophomonas maltophilia, β-lactamase-secreting Enterobacteriaceae, and methicillin-resistant Staphylococcus aureus (MRSA). To date the treatment of MRSA has been based on the use of glycopeptides.

Linezolid, a new oxazolidinone, is a potential alternative to vancomycin for the treatment of severe MRSA infections. Kollef et al. [57] reported the results of a retrospective analysis of a subgroup of patients enrolled in two randomized double-blind trials comparing 600 mg linezolid with 1 g vancomycin every 12 h. Among 544 patients with suspected Gram-positive VAP, 91 had VAP due to MRSA. In this subgroup clinical cure rates were 62.2% with linezolid (vs. 21.2% with vancomycin, p=0.001), bacterial eradication rates 60.5% (vs. 22.9%, p=0.001), and survival rates 84.1% (vs. 61.7%, p=0.02). Logistic regression analysis identified linezolid as one of the independent factors associated with survival in patients with MRSA VAP (OR 4.6, p=0.01), together with an APACHE II score of 20 or less, the presence of pleural effusion, and the absence of bacteremia.

In their editorial Ioanas and Lode [58] suggested that linezolid may already be a better choice than vancomycin for VAP due to MRSA, the major reason being that vancomycin is a "modest drug" for the treatment of lung infection, owing predominantly to its extremely poor penetration into the epithelial lining fluid and to frequently inadequate serum levels. However, linezolid-resistant strains of S. aureus have already been reported in the United States and the United Kingdom. In conclusion, the authors recommended that (a) linezolid be used with caution and (b) parallel strategies such as antibiotic rotation, restricted use of antibiotics, hygiene measures, and cohorting be adopted to diminish antibiotic selection pressure and to decrease infections by resistant S. aureus.

Prevention

To inform the design of strategies for the prevention of VAP, Valles et al. [59] conducted a study to identify the routes and patterns of colonization with P. aeruginosa. Ninety-eight intubated patients ventilated for longer than 3 days were investigated; the authors collected samples from the tap water of the room, stomach, oropharynx, subglottic secretions, trachea, and rectum at the time of intubation and three times per week. They observed colonization in 54.2% of patients and tracheal colonization in 30.5%. Ten patients had tracheal colonization at intubation, and four developed VAP. P. aeruginosa was isolated in 62.4% of samples of the room's tap water; however, the pulsotypes identified there were rarely associated with VAP. The authors concluded that colonization with P. aeruginosa was both endogenous and exogenous. As a consequence they suggested combining prophylactic measures against airway colonization with infection control measures to reduce cross-contamination.

Subglottic suctioning and semirecumbent positioning have been proposed to prevent VAP [60]. Girou et al. [61] designed a randomized trial to evaluate the impact of these two measures on tracheal colonization in patients receiving long-term MV. Oropharyngeal and tracheal secretions were collected daily (40 samples in the eight patients of the suctioning group and 57 in the ten patients of the control group). Comparing patients receiving these measures with patients receiving standard care in the supine position, the authors identified no differences in tracheal bacterial counts (6.6 log10 cfu/ml in the suctioning group vs. 5.1 log10 cfu/ml in the control group), in colonization on day 1 (75% vs. 80%, respectively), or in the daily bacterial counts in the oropharynx and trachea. The VAP rate was similar in both groups. They concluded that continuous subglottic suctioning and the semirecumbent position do not reduce tracheal colonization in long-term MV patients. The true impact of such prophylactic measures needs to be evaluated more precisely.

Inflammatory response

In contrast to community-acquired pneumonia, the lung inflammatory response has been poorly investigated in patients with VAP. Millo et al. [62] examined whether cytokine concentrations change in the lungs of patients with VAP. They sampled the lungs by nondirected bronchial lavage and performed serial measurements of cytokines and cytokine inhibitors in 9 patients with VAP and 19 patients without. They observed no changes in plasma concentrations of cytokines or cytokine inhibitors. In nondirected bronchial lavage fluid, however, the concentrations of tumor necrosis factor α, tumor necrosis factor α receptor 1, and interleukins 1α, 1β, and 6 increased significantly in patients with VAP. The authors concluded that the production of cytokines and cytokine inhibitors is compartmentalized in the lungs of patients who develop VAP.

El-Solh et al. [63] evaluated the impact of inadequate antimicrobial therapy on procoagulant and fibrinolytic activity in 29 patients with bacteriologically confirmed VAP and 8 control patients treated with MV. In the seven patients who were not treated appropriately, thrombin-antithrombin complex, prothrombin activation fragment, and plasminogen activator inhibitor type 1 levels were higher than in those who received adequate treatment. Despite antibiotic adjustment on day 4, thrombin-antithrombin complex levels remained elevated and were correlated with the PaO2/FIO2 ratio. This procoagulant activity was accompanied by a local depression of fibrinolytic capacity. No correlation was found between the bacterial burden and the hemostatic derangements. The authors suggested that the lung inflammatory response in patients with inadequately treated VAP induces a local procoagulant activity associated with hypoxemia.