Article

Distributed Piezoelectric Sensor System for Damage Identification in Structures Subjected to Temperature Changes

1 Control, Dynamics and Applications (CoDAlab), Departament de Matemàtiques, Escola d’Enginyeria de Barcelona Est (EEBE), Universitat Politècnica de Catalunya (UPC), Campus Diagonal-Besòs (CDB), Eduard Maristany, 6–12, Sant Adrià de Besòs (Barcelona) 08930, Spain
2 MEM (Modelling-Electronics and Monitoring Research Group), Faculty of Electronics Engineering, Universidad Santo Tomás, Bogotá 110231, Colombia
3 Departamento de Ingeniería Eléctrica y Electrónica, Universidad Nacional de Colombia, Cra 45 No. 26-85, Bogotá 111321, Colombia
4 Facultad de Ingeniería, Fundación Universitaria Los Libertadores, Carrera 16 No. 63A-68, Bogotá 111221, Colombia
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2017, 17(6), 1252; https://doi.org/10.3390/s17061252
Submission received: 24 April 2017 / Revised: 25 May 2017 / Accepted: 26 May 2017 / Published: 31 May 2017
(This article belongs to the Section Physical Sensors)

Abstract:
Structural health monitoring (SHM) is a very important area in a wide spectrum of fields and engineering applications. With an SHM system, it is possible to reduce the number of unnecessary inspection tasks, the associated risk and the maintenance cost of a wide range of structures during their lifetime. One of the problems in the detection and classification of damage is the constant change in the operational and environmental conditions: small changes in these conditions can be interpreted by the SHM system as damage even though the structure is healthy. Several applications for the monitoring of structures have been developed and reported in the literature, and some of them include temperature compensation techniques. In real applications, digital processing technologies have proven their value by: (i) offering a very interesting way to acquire information from the structures under test; (ii) applying methodologies to provide a robust analysis; and (iii) performing damage identification with practically useful accuracy. This work presents the implementation of an SHM system based on the use of piezoelectric (PZT) sensors for inspecting a structure subjected to temperature changes. The methodology includes the use of multivariate analysis, sensor data fusion and machine learning approaches. The methodology is tested and evaluated with aluminum and composite structures that are subjected to temperature variations. Results show that damage can be detected and classified in all of the cases in spite of the temperature changes.

1. Introduction

The variability in the dynamic properties of a structure in service can be the result of time-varying environmental and operational conditions [1]. This variability is mainly one of the causes of an inaccurate damage identification process when the analysis of a structure is performed based on data-driven algorithms [2]. From this point of view, it is possible to affirm that the variability of environmental and operational conditions is one of the intrinsic features of the design of a structural health monitoring system [3].
There are many magnitudes to consider in the design of a structural health monitoring system; for instance, temperature, temperature gradients, humidity and wind or traffic [4]. When these factors are not considered, the results may mask or conceal the changes in the structure. Therefore, the diagnosis provided by the structural health monitoring system will not be accurate. For this reason, when designing a structural health monitoring system, it is very important to propose algorithms or methodologies that can cope with these environmental and operational conditions. The final goal is to offer an accurate damage identification process while also improving the safety of the structure and reducing the time and cost of its associated maintenance [5].
At present, it is possible to find some works in the literature that consider the effect of environmental and operational variations. One of the most widely applied strategies to deal with these kinds of variations is principal component analysis (PCA). One of the main advantages of PCA is its ability to reduce the dimensionality of data, which is particularly useful when these data are collected from multiple sensors. In this sense, multivariate analysis has been proven to be effective for damage detection and classification [6,7]. In the same way, PCA is useful to perform a linear analysis when it is assumed that the effect of the environmental conditions on the vibration features of the structure is linear or weakly non-linear [4]. PCA has also been used in combination with some other techniques or strategies. For instance, Torres-Arredondo et al. [8] jointly consider the discrete wavelet transform for feature extraction and selection, linear principal component analysis for data-driven modeling, and self-organizing maps for a two-level clustering under the principle of local density, for temperature compensation in acousto-ultrasonics. Leichtle et al. [9] apply principal component analysis jointly with k-means clustering to discriminate between changed and unchanged buildings as a method for unsupervised change detection in a dynamic urban environment. PCA has also been applied as a way to characterize the feature vector that defines the antigens and the antibodies in an artificial immune system conceived to detect damage in structures under temperature variations [10]. The robust version of singular value decomposition (SVD), which is closely related to principal component analysis, has been used in [11] to compute the distance of an observation to the subspace spanned by the intact measurements. This distance to the subspace is then used to determine the presence of damage.
Structural health monitoring strategies that do not rely on principal component analysis include, for instance, the work by Deraemaeker et al. [12], where the damage detection strategy is based uniquely on vibration measurements under changing environmental conditions. More precisely, two features are considered based on the measurements: the eigen-properties of the structure and peak indicators computed on the Fourier transform (FT) of modal filters. The effects of the changing environment are handled using factor analysis, and damage is detected by means of Shewhart-T control charts. Similarly, Balmès et al. [13] propose a nonparametric damage detection approach in which it is assumed that several datasets are recorded on the safe structure at different and unknown temperatures. Finally, the approach smooths out the temperature effect using an averaging operation.
Buren et al. [14] address the damage detection problem combining three technologies to guarantee the robustness of a structural condition monitoring system subjected to environmental variability. One of these technologies is a time series algorithm that is trained with baseline data with three objectives: (a) to predict the vibration response; (b) to compare predictions to actual measurements collected on a damaged structure; and (c) to calculate a damage indicator. In this work [14], the robustness analysis is performed propagating the uncertainty through the time series algorithm and computing the equivalent deviation of the damage indicator.
Similar to PCA, time series analysis can also be combined with some other strategies, such as neural networks and statistical inference to develop damage classification strategies, including ambient variations of the system. For instance, Sohn et al. [15] developed an autoregressive and autoregressive with exogenous inputs (AR-ARX) model of the structure to extract damage-sensitive features, then used a neural network for data normalization and finally applied hypothesis testing to automatically infer the damage state of the system.
Several machine learning approaches have already been reported in the literature. For instance, it is possible to find the use of an auto-associative neural network (AANN), factor analysis, Mahalanobis distance and singular value decomposition [16] tested in a three-story frame structure where data are collected with accelerometers. Support vector machines (SVM) have also been applied for damage detection, localization and damage assessment in a Gnat trainer aircraft [17] showing the advantages in the use of machine learning approaches for damage identification. Some of these works use novelty detectors based on outlier analysis, density estimation and an auto-associative neural network [18,19] for these applications. Unsupervised machine learning algorithms and physics-based temperature compensation were also explored by Roy et al. [20]. More precisely, Roy et al. [20] use a neural network-based sparse autoencoder algorithm to learn the compressed representation of the data from sensors in order to localize damages in a structure with the Mahalanobis squared distance.
Previous works by the authors in this field include the use and development of multivariate analysis techniques, such as linear principal component analysis (PCA), non-linear PCA [7] and independent component analysis (ICA), to detect [21], classify and localize damage in structures [22]. In this paper, we present a structural health monitoring system based on [23] that is oriented to detect and classify damage in a structure subjected to temperature variations. The system works with data collected from a piezoelectric sensor network permanently attached to the structure, and it introduces a new way to organize the data, multivariate data analysis techniques and machine learning analysis. Some contributions of this system are the use of sensor data fusion, which introduces a different organization of the data, and the definition of the feature vector so that the effect of temperature is included during the training process. The approach is multivariate: the analysis uses measurements from all of the sensors distributed along the structure, which offers a generalized analysis from different points of view by fusing the data into a single result. This procedure allows reducing the effect of the temperature in the damage detection and classification process when machine learning approaches are applied.
The structure of the paper is as follows: in Section 2, a brief description of the theoretical background required to construct the SHM system is presented. This background includes principal component analysis and machine learning approaches with special focus on how the three-way matrix with the collected data is unfolded to a two-way array. Section 3 describes the SHM system that is used to inspect the structures and the strategies that are applied to classify the damage in structures subjected to temperatures changes. In Section 4, the experimental setup is introduced together with exhaustive results. Finally, in Section 5, some concluding remarks are discussed.

2. Theoretical Background

2.1. Principal Component Analysis

One of the greatest difficulties in data analysis arises when the amount of data is very large and there is no apparent relationship between all of the information, or when this relationship is very difficult to find. In this sense, principal component analysis emerged as a very useful tool to reduce and analyze large quantities of information. Principal component analysis was described for the first time by Pearson in 1901 as a tool of multivariate analysis, and was also used by Hotelling in 1933 [24]. This method allows finding the principal components, which are a reduced version of the original dataset and include the relevant information that explains the variation between the measured variables. To find these components, the analysis includes the transformation of the data from the current coordinate space to a new space in order to re-express the original data, trying to reduce, filter or eliminate the noise and possible redundancies. These redundancies are measured by means of the correlation between the variables [25].
There are two mechanisms to implement the analysis of principal components: (i) the first method is based on correlations; and (ii) a second strategy is based on the covariance. It is necessary to highlight that PCA is not invariant to scale, so the data under study must be normalized. Many methods can be used to perform this normalization, as is shown in [25,26]. In many applications, PCA is also used as a tool to reduce the dimensionality of the data. Currently, there are several useful toolboxes that implement PCA and analyze the reduced data provided by this strategy [27]. For the sake of completeness, the following sections present a succinct description of the PCA modeling that includes how the measured data are arranged in matrix form. We also present the normalization procedure (group scaling) and how the new data to inspect are projected onto the PCA model.

2.1.1. PCA Modeling

As stated in Section 2.1, one of the considerable difficulties in data analysis emerges when the quantity of data is very large. In a general case, typical data from a batch process may consist of N variables measured at L time instants for n batches or experimental trials. These data can be easily arranged in a three-way matrix $\mathbf{Z} \in \mathcal{M}_{n\times N\times L}(\mathbb{R})$, as represented in Figure 1 (top, left). However, to apply multivariate statistical techniques such as principal component analysis (PCA), this three-way matrix $\mathbf{Z}$ must be unfolded to a two-way array. Westerhuis et al. [28] discussed in depth how to unfold this three-way matrix and the effects of data normalization on the multivariate statistical techniques. One of the possibilities presented in [28] is depicted in Figure 1 (right), where the three-way matrix $\mathbf{Z} \in \mathcal{M}_{n\times N\times L}(\mathbb{R})$ is unfolded to a two-way matrix with $n\cdot L$ rows and $N$ columns. In this way, each of the $N$ columns in the unfolded matrix still represents one of the $N$ variables that are measured in the process.
However, in our application, we propose a quite different approach to unfold the original three-way matrix $\mathbf{Z}$. As can be observed in Figure 1 (bottom, left), the three-way matrix $\mathbf{Z} \in \mathcal{M}_{n\times N\times L}(\mathbb{R})$ is unfolded to a two-way matrix with $n$ rows and $N\cdot L$ columns. In this way, the columns of the unfolded matrix no longer represent the variables, but the measures of the variables at the different time instants. More precisely, the submatrix defined by taking the $n$ rows and the first $L$ columns represents the discretized measures of the first variable for the $n$ batches or experimental trials; similarly, the submatrix defined by taking the $n$ rows and columns $L+1$ to $2L$ represents the discretized measures of the second variable for the $n$ batches or experimental trials. In general, the submatrix defined by taking the $n$ rows and columns $(l-1)\cdot L+1$ to $l\cdot L$ represents the discretized measures of the $l$-th variable for the $n$ batches or experimental trials.
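As a minimal sketch of this unfolding (our own illustration, assuming the raw measurements are stored in a NumPy array of shape (n, N, L); the function name is hypothetical), the rearrangement into the proposed n × (N·L) two-way matrix can be written as:

```python
import numpy as np

def unfold_trials_by_sensors(Z):
    """Unfold a three-way array Z of shape (n, N, L): n trials, N sensors,
    L time instants, into a two-way matrix of shape (n, N*L), so that
    columns (l-1)*L ... l*L-1 hold the L samples of the l-th sensor."""
    n, N, L = Z.shape
    # C-order reshape concatenates, for each trial, the L-sample records of the N sensors.
    return Z.reshape(n, N * L)

# Minimal usage example with synthetic data (3 trials, 4 sensors, 5 samples).
Z = np.random.randn(3, 4, 5)
X = unfold_trials_by_sensors(Z)
print(X.shape)  # (3, 20)
```

With this layout, the first L columns of the result contain the record of the first sensor, the next L columns the record of the second sensor, and so on, matching the block structure of Equation (1) below.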
The first step to build a PCA model is to measure, from a healthy structure, different sensors or variables during $(L-1)\Delta$ seconds, where $\Delta$ is the sampling time, and $n \in \mathbb{N}$ experimental trials. The discretized measures of the sensors can be unfolded and arranged in matrix form as follows:
$$
\mathbf{X} =
\left(
\begin{array}{ccc|ccc|c|ccc}
x_{11}^{1} & \cdots & x_{1L}^{1} & x_{11}^{2} & \cdots & x_{1L}^{2} & \cdots & x_{11}^{N} & \cdots & x_{1L}^{N} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & & \vdots & \ddots & \vdots \\
x_{i1}^{1} & \cdots & x_{iL}^{1} & x_{i1}^{2} & \cdots & x_{iL}^{2} & \cdots & x_{i1}^{N} & \cdots & x_{iL}^{N} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & & \vdots & \ddots & \vdots \\
x_{n1}^{1} & \cdots & x_{nL}^{1} & x_{n1}^{2} & \cdots & x_{nL}^{2} & \cdots & x_{n1}^{N} & \cdots & x_{nL}^{N}
\end{array}
\right)
\in \mathcal{M}_{n\times(N\cdot L)}(\mathbb{R})
= \left( \mathbf{X}^{1} \mid \mathbf{X}^{2} \mid \cdots \mid \mathbf{X}^{N} \right)
$$
where $\mathcal{M}_{n\times(N\cdot L)}(\mathbb{R})$ is the vector space of $n\times(N\cdot L)$ matrices over $\mathbb{R}$ and $N\in\mathbb{N}$ is the number of sensors. It is worth noting that each row vector $\mathbf{X}(i,:)\in\mathbb{R}^{N\cdot L}$, $i=1,\ldots,n$, of matrix $\mathbf{X}$ in Equation (1) represents the measurements from all of the sensors at a given experimental trial. Similarly, each column vector $\mathbf{X}(:,j)\in\mathbb{R}^{n}$, $j=1,\ldots,N\cdot L$, contains the measurements from one sensor at one specific time instant over the whole set of experimental trials.
As stated before, one of the goals of PCA is to eliminate the redundancies in the original data. This objective is achieved through an orthogonal linear transformation matrix
$$\mathbf{P} \in \mathcal{M}_{(N\cdot L)\times(N\cdot L)}(\mathbb{R})$$
that is used to transform or project the original data matrix $\mathbf{X}$ in Equation (1) according to the matrix product
$$\mathbf{T} = \mathbf{X}\mathbf{P} \in \mathcal{M}_{n\times(N\cdot L)}(\mathbb{R}),$$
where the resulting matrix $\mathbf{T}$ has a diagonal covariance matrix.

2.1.2. Normalization: Group Scaling

Since the data in matrix $\mathbf{X}$ come from several sensors and could have different magnitudes, and since PCA is not invariant to scale, a preprocessing stage must be applied to rescale the data. This normalization is based on the mean of all measurements of each sensor at the same time instant and the standard deviation of all measurements of each sensor. In this sense, for $k=1,\ldots,N$, we define:
$$\mu_{j}^{k} = \frac{1}{n}\sum_{i=1}^{n} x_{ij}^{k}, \quad j=1,\ldots,L,$$
$$\mu^{k} = \frac{1}{nL}\sum_{i=1}^{n}\sum_{j=1}^{L} x_{ij}^{k},$$
$$\sigma^{k} = \sqrt{\frac{1}{nL}\sum_{i=1}^{n}\sum_{j=1}^{L} \left(x_{ij}^{k}-\mu^{k}\right)^{2}},$$
where $\mu_{j}^{k}$ is the mean of the measures placed in the same column, that is, the mean of the $n$ measures of sensor $k$ in matrix $\mathbf{X}^{k}$ at time instant $(j-1)\Delta$ seconds; $\mu^{k}$ is the mean of all of the elements in matrix $\mathbf{X}^{k}$, that is, the mean of all of the measures of sensor $k$; and $\sigma^{k}$ is the standard deviation of all of the measures of sensor $k$. Then, the elements $x_{ij}^{k}$ of matrix $\mathbf{X}$ are scaled to define a new matrix $\check{\mathbf{X}}$ as
$$\check{x}_{ij}^{k} := \frac{x_{ij}^{k}-\mu_{j}^{k}}{\sigma^{k}}, \quad i=1,\ldots,n, \quad j=1,\ldots,L, \quad k=1,\ldots,N.$$
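A minimal sketch of this group scaling, under the same assumptions as the previous snippet (an n × (N·L) array with N sensor blocks of L columns each; the function and variable names are our own), could look as follows:

```python
import numpy as np

def group_scale(X, n_sensors):
    """Group scaling: subtract the per-column mean and divide by the
    per-sensor standard deviation, computed block-wise over the N sensor
    blocks of L columns each. Returns the scaled matrix and the statistics
    needed later to scale new data."""
    n, NL = X.shape
    L = NL // n_sensors
    X_scaled = np.empty_like(X, dtype=float)
    col_means = np.empty(NL)
    sensor_stds = np.empty(n_sensors)
    for k in range(n_sensors):
        block = X[:, k * L:(k + 1) * L]        # measures of sensor k
        mu_jk = block.mean(axis=0)             # mean per time instant (column)
        sigma_k = block.std()                  # std of all measures of sensor k
        X_scaled[:, k * L:(k + 1) * L] = (block - mu_jk) / sigma_k
        col_means[k * L:(k + 1) * L] = mu_jk
        sensor_stds[k] = sigma_k
    return X_scaled, col_means, sensor_stds
```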
For the sake of simplicity, the scaled matrix $\check{\mathbf{X}}$ is renamed again as $\mathbf{X}$. One of the properties of the scaled matrix $\mathbf{X}$ is that it is mean-centered [29]. Consequently, the covariance matrix of $\mathbf{X}$ can be defined and computed as
$$\mathbf{C}_{\mathbf{X}} = \frac{1}{n-1}\,\mathbf{X}^{T}\mathbf{X} \in \mathcal{M}_{(N\cdot L)\times(N\cdot L)}(\mathbb{R}).$$
The subspaces in PCA are defined by the eigenvectors and eigenvalues of the covariance matrix as follows:
$$\mathbf{C}_{\mathbf{X}}\,\mathbf{P} = \mathbf{P}\,\boldsymbol{\Lambda},$$
where the columns of $\mathbf{P} \in \mathcal{M}_{(N\cdot L)\times(N\cdot L)}(\mathbb{R})$ are the eigenvectors of $\mathbf{C}_{\mathbf{X}}$ and are defined as the principal components. The diagonal terms of matrix $\boldsymbol{\Lambda} \in \mathcal{M}_{(N\cdot L)\times(N\cdot L)}(\mathbb{R})$ are the eigenvalues $\lambda_i$, $i=1,\ldots,N\cdot L$, of $\mathbf{C}_{\mathbf{X}}$, whereas the off-diagonal terms are zero, that is,
$$\Lambda_{ii} = \lambda_i, \quad i=1,\ldots,N\cdot L,$$
$$\Lambda_{ij} = 0, \quad i,j=1,\ldots,N\cdot L, \quad i\neq j.$$
The goal of principal component analysis is two-fold: on the one hand, to eliminate the redundancies of the original data, which is achieved by transforming the original data through the projection defined by matrix $\mathbf{P}$ in Equation (7); on the other hand, to reduce the dimensionality of the dataset $\mathbf{X}$. This second objective is achieved by selecting only a limited number $\ell < N\cdot L$ of principal components, those related to the highest eigenvalues. In this manner, given the reduced matrix
$$\hat{\mathbf{P}} = \left( \mathbf{p}_1 \mid \mathbf{p}_2 \mid \cdots \mid \mathbf{p}_\ell \right) \in \mathcal{M}_{(N\cdot L)\times\ell}(\mathbb{R}),$$
matrix $\hat{\mathbf{T}}$ is defined as
$$\hat{\mathbf{T}} = \mathbf{X}\hat{\mathbf{P}} \in \mathcal{M}_{n\times\ell}(\mathbb{R}).$$
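For illustration, the PCA model itself (covariance matrix, eigendecomposition and reduced projection matrix) can be sketched as below. This is our own outline with hypothetical names, not the authors' implementation; an SVD-based routine would give numerically equivalent results on the group-scaled (and hence mean-centered) data.

```python
import numpy as np

def pca_model(X_scaled, n_components):
    """Build the PCA model from the scaled data: covariance matrix C_X,
    eigendecomposition sorted by decreasing eigenvalue, and the reduced
    projection matrix P_hat with the first `n_components` eigenvectors."""
    n = X_scaled.shape[0]
    C = (X_scaled.T @ X_scaled) / (n - 1)      # covariance matrix C_X
    eigvals, eigvecs = np.linalg.eigh(C)       # eigh: C is symmetric
    order = np.argsort(eigvals)[::-1]          # sort by decreasing variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    P_hat = eigvecs[:, :n_components]          # reduced matrix P_hat
    T_hat = X_scaled @ P_hat                   # scores T_hat = X P_hat
    return P_hat, eigvals, T_hat
```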

2.1.3. Projection of New Data onto the PCA Model

The current structure to inspect is excited by the same signal as the one that excited the healthy structure in Section 2.1.1. Therefore, when the measures are obtained from $N \in \mathbb{N}$ sensors during $(L-1)\Delta$ seconds and $\nu \in \mathbb{N}$ experimental trials, a new data matrix $\mathbf{Y}$ is constructed as in Equation (1):
$$
\mathbf{Y} =
\left(
\begin{array}{ccc|ccc|c|ccc}
y_{11}^{1} & \cdots & y_{1L}^{1} & y_{11}^{2} & \cdots & y_{1L}^{2} & \cdots & y_{11}^{N} & \cdots & y_{1L}^{N} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & & \vdots & \ddots & \vdots \\
y_{i1}^{1} & \cdots & y_{iL}^{1} & y_{i1}^{2} & \cdots & y_{iL}^{2} & \cdots & y_{i1}^{N} & \cdots & y_{iL}^{N} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & & \vdots & \ddots & \vdots \\
y_{\nu 1}^{1} & \cdots & y_{\nu L}^{1} & y_{\nu 1}^{2} & \cdots & y_{\nu L}^{2} & \cdots & y_{\nu 1}^{N} & \cdots & y_{\nu L}^{N}
\end{array}
\right)
\in \mathcal{M}_{\nu\times(N\cdot L)}(\mathbb{R}).
$$
It is worth noting, at this point, that the natural number ν (the number of rows of matrix Y ) is not necessarily equal to n (the number of rows of X ), but the number of columns of Y must agree with that of X ; that is, in both cases, the number N of sensors and the number of time instants L must be equal.
Before the collected data arranged in matrix $\mathbf{Y}$ are projected onto the new space spanned by the eigenvectors in matrix $\mathbf{P}$ in Equation (7), the matrix has to be scaled to define a new matrix $\check{\mathbf{Y}}$ as in Equation (5):
$$\check{y}_{ij}^{k} := \frac{y_{ij}^{k}-\mu_{j}^{k}}{\sigma^{k}}, \quad i=1,\ldots,\nu, \quad j=1,\ldots,L, \quad k=1,\ldots,N,$$
where $\mu_{j}^{k}$ and $\sigma^{k}$ are the real numbers defined and computed in Equations (2) and (4), respectively.
The projection of each row vector $\mathbf{r}_i = \check{\mathbf{Y}}(i,:) \in \mathbb{R}^{N\cdot L}$, $i=1,\ldots,\nu$, of matrix $\check{\mathbf{Y}}$ onto the space spanned by the eigenvectors in $\hat{\mathbf{P}}$ is performed through the following vector-to-matrix multiplication:
$$\mathbf{t}_i = \mathbf{r}_i \cdot \hat{\mathbf{P}} \in \mathbb{R}^{\ell}.$$
For each row vector r i , i = 1 , , ν , the first component of vector t i is called the first score or Score 1; similarly, the second component of vector t i is called the second score or Score 2, and so on.
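The projection of new data then reuses the baseline statistics and eigenvectors obtained from the healthy structure. The following sketch continues the group_scale and pca_model snippets above (illustrative names only):

```python
import numpy as np

def project_new_data(Y, col_means, sensor_stds, P_hat, n_sensors):
    """Scale the new data matrix Y (nu x N*L) with the baseline statistics
    (col_means, sensor_stds) and project it onto the reduced PCA model P_hat;
    row i of the result contains Score 1, Score 2, ... of experimental trial i."""
    nu, NL = Y.shape
    L = NL // n_sensors
    Y_scaled = np.empty_like(Y, dtype=float)
    for k in range(n_sensors):
        cols = slice(k * L, (k + 1) * L)
        Y_scaled[:, cols] = (Y[:, cols] - col_means[cols]) / sensor_stds[k]
    return Y_scaled @ P_hat                    # rows are the score vectors t_i
```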

2.2. Machine Learning

Machine learning has revolutionized the way complex problems are tackled with the help of computer programs. In the incessant pursuit of the best tools for data analysis, machine learning has been highlighted for its capability of providing a quite remarkable set of strategies for pattern recognition. More precisely, when a deterministic mathematical model is difficult to define and the data have, at first glance, no correlation, these pattern recognition techniques are generally able to find some kind of relationship. Machine learning strategies and bio-inspired algorithms allow avoiding this difficulty through mechanisms designed to find the answer by themselves. In SHM and related areas, it is possible to find applications in which machine learning has been used to detect problems such as breaks, corrosion, cracks, impact damage, delamination, debonding and fiber breakage (some pertinent to metals and others to composite materials) [30]. In addition, machine learning has also been used to provide information about the future behavior of a structure under extreme events such as earthquakes [31].
Depending on how the algorithms are implemented, machine learning can be classified into two main approaches: unsupervised and supervised learning. In the first case, the information is grouped and interpreted using uniquely the input data. However, to perform the learning task in the second case, information about the output data is required. Figure 2 shows this classification and includes information about the kind of tasks that can be performed: clustering, classification and regression.
This paper is focused on the use of supervised learning approaches and, particularly, on the use of nearest neighbor classification, decision trees and support vector machines (SVM). A brief description of nearest neighbor pattern classification, decision trees and support vector machines is introduced in the following subsections.

2.2.1. Nearest Neighbor Pattern Classification

The nearest neighbor (NN) is a simple nonparametric and highly efficient technique [32] that has been used in several areas such as pattern recognition, ranking models, text categorization and classification for big data [33,34], just to name a few. One of the most used algorithms in machine learning applications is k-NN, also known as k-nearest neighbors. k-NN stands out due to its simplicity and the excellent results obtained when this technique is applied to diverse problems [35]. The algorithm works by comparing an input vector with the k closest training samples in the feature space. To perform the classification, the algorithm identifies the most common class among the k nearest neighbors. It requires a training step, to define the neighbors based on their distance to the test sample, and a testing step, to determine the class to which this test sample belongs [35].
The number of neighbors can be changed to adjust the k-NN algorithm. In this sense, for instance, the use of one neighbor is known as fine k-NN, while a coarse k-NN uses 100 neighbors; using many neighbors can be time consuming. There are six different k-NN classifiers available in MATLAB that can be used to classify data [36], and these classifiers are based on different distances. Some of them (the fine, medium and coarse k-NN algorithms) use the Euclidean distance to determine the nearest neighbors. According to MATLAB, each classifier works as follows [35] (a small illustrative sketch of these variants is given after the list):
  • Fine k-NN: a nearest neighbor classifier that makes finely-detailed distinctions between classes with the number of neighbors set to one.
  • Medium k-NN: a nearest neighbor classifier with fewer distinctions than a fine k-NN with the number of neighbors set to 10.
  • Coarse k-NN: a nearest neighbor classifier that makes coarse distinctions between classes, with the number of neighbors set to 100.
  • Cosine k-NN: a nearest neighbor classifier that uses the cosine distance metric. The cosine distance between two vectors u and v is defined as:
    $1 - \dfrac{u \cdot v}{\|u\|\,\|v\|},$
    that is, one minus the ratio of the inner product of u and v over the product of the norms of u and v.
  • Cubic k-NN: a nearest neighbor classifier that uses the cubic distance metric. The cubic distance between two n-dimensional vectors u and v is defined as:
    $\sqrt[3]{\sum_{i=1}^{n} |u_i - v_i|^{3}}.$
  • Weighted k-NN: a nearest neighbor classifier that uses distance weighting. The weighted Euclidean distance between two n-dimensional vectors u and v is defined as:
    $\sqrt{\sum_{i=1}^{n} w_i (u_i - v_i)^{2}},$
    where $0 < w_i < 1$ and $\sum_{i=1}^{n} w_i = 1$.
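For illustration, the k-NN variants above can be approximated with scikit-learn rather than the MATLAB Classification Learner used in this work. The sketch below is an assumed mapping only: the weighted Euclidean distance is supplied as a custom callable, and the feature weights w and the synthetic training data are arbitrary examples.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative training set: the first two PCA scores and four structural states.
X_train = np.random.randn(200, 2)
y_train = np.random.randint(0, 4, size=200)

w = np.array([0.7, 0.3])                       # example feature weights, summing to 1

def weighted_euclidean(u, v):
    """Weighted Euclidean distance with fixed feature weights w."""
    return np.sqrt(np.sum(w * (u - v) ** 2))

classifiers = {
    "fine k-NN":     KNeighborsClassifier(n_neighbors=1),                       # Euclidean
    "medium k-NN":   KNeighborsClassifier(n_neighbors=10),
    "coarse k-NN":   KNeighborsClassifier(n_neighbors=100),
    "cosine k-NN":   KNeighborsClassifier(n_neighbors=10, metric="cosine"),
    "cubic k-NN":    KNeighborsClassifier(n_neighbors=10, metric="minkowski", p=3),
    "weighted k-NN": KNeighborsClassifier(n_neighbors=10, metric=weighted_euclidean),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)

print(classifiers["fine k-NN"].predict([[0.0, 0.0]]))  # predicted structural state
```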
k-NN has been used successfully in fault detection for gas sensor arrays [33], classification for big data [37], fault detection and classification for high voltage DC transmission lines [35] and traffic state prediction [38], among others.

2.2.2. Decision Trees

These machine learning methods are non-parametric computationally-intensive methods [39] that can be applied to regression and classification problems and can work with datasets with a large amount of cases and variables [40]. In general, these methods work by segmenting the predictor space into a number of simple regions.
Some of the advantages and disadvantages of these methods are:
  • Compared with other machine learning methods, trees are simple and easy to understand.
  • Several trees can be built with different methods and combined to obtain a single prediction.
  • The combination of different trees usually produces better results.
  • Because of their simplicity, more elaborate methods can produce better results in classification and regression tasks.
Different techniques have been proposed; among them, bagging (bootstrap aggregating) and boosting stand out. In the first, many bootstrap samples are obtained from the data, some prediction method is applied to each bootstrap sample, and then the results are combined. In the regression case, the combination of the results is performed by averaging, while simple voting is used for classification [39]. Bagging is thus a committee-based approach that applies a prediction method to each sample and combines the results to obtain an overall prediction.
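A minimal sketch of bagging in the sense just described (bootstrap samples, one tree per sample, simple voting), with illustrative data and our own function names:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # e.g., PCA scores
y = rng.integers(0, 4, size=200)               # structural states

# Bagging: fit one tree on each bootstrap sample and combine by majority vote.
trees = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X)) # bootstrap sample (with replacement)
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

def bagged_predict(x_new):
    votes = np.array([t.predict(x_new.reshape(1, -1))[0] for t in trees])
    return np.bincount(votes).argmax()         # simple majority vote

print(bagged_predict(np.array([0.1, -0.2])))
```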

2.2.3. Support Vector Machines

Support vector machines (SVM) are supervised methods commonly used for regression and classification tasks [41]. In the case of classification, an SVM builds a maximum-margin hyperplane that separates the data points of the different classes. The support vectors correspond to the data points that are closest to the separating hyperplane.
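As an illustrative sketch only (the kernels and parameters are assumptions, roughly analogous to the Gaussian and cubic SVM variants reported in Section 4), a multi-class SVM on the PCA scores could be set up as follows:

```python
import numpy as np
from sklearn.svm import SVC

X = np.random.randn(200, 2)                    # e.g., PCA scores
y = np.random.randint(0, 4, size=200)          # structural states

svm_rbf   = SVC(kernel="rbf", gamma="scale").fit(X, y)   # Gaussian kernel
svm_cubic = SVC(kernel="poly", degree=3).fit(X, y)        # cubic (degree-3) kernel

print(len(svm_rbf.support_))                   # number of support vectors
```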

3. Damage Classification Methodology

In an automated structural health monitoring system, the monitoring system should decide autonomously whether the host structure is damaged or not [7]. With this purpose in mind, this work proposes a damage classification methodology for structures that are subjected to temperature changes. This strategy is described in the following sections.

Data Acquisition System

The methodology uses data from a structure instrumented with a piezoelectric transducer network. Figure 3 shows the scheme of the data acquisition system, where it can be observed that the sensors are attached to the structure. Each piezoelectric transducer (PZT) can operate as an actuator or as a sensor in several actuation phases. Each actuation phase defines a particular piezoelectric transducer as the actuator, and therefore, this PZT excites the structure with a given excitation signal. The rest of the PZTs act as sensors, in such a way that the measured and discretized signals are organized as described in Section 2.1.1, ready to be used in the classification algorithms. The number of actuation phases corresponds to the number of piezoelectric transducers installed in the structure.
The use of piezoelectric transducers is justified by the fact that this kind of sensor is able to produce Lamb waves through the excitation of an actuator with an arbitrary waveform, as represented in Figure 4. At the same time, the propagated wave, which carries information about the state of the structure at different locations, is collected by the rest of the sensors, since piezoelectric transducers can sense the propagated Lamb waves and the information can be captured by a digitizer card. The proposed SHM system works with an arbitrary wave generator, a digitizer card, a personal computer (PC) and a multiplexer card to select the actuator/sensors in each actuation phase.
Figure 5 shows a schematic representation of the way data are collected and organized for each actuation phase. That is, in actuation Phase 1, Sensor 1 is used as an actuator, and the measured data from Sensors 2, 3, ..., N are captured and organized. In the example represented in Figure 5, four piezoelectric transducers are used. The procedure, however, is identical in the case of a different number of piezoelectric transducers.
To include the effect of the temperature in the proposed methodology, data from each temperature have to be considered. In this specific case, the system requires data from all of the structural states to consider in the classification (without damage, Damage 1, Damage 2 and Damage 3, for instance) under a wide range of temperatures ($T_1,\ldots,T_M$). Each temperature defines a submatrix where the rows represent the different structural states and the columns the different actuation phases. Figure 5 represents an example with four structural states (no damage, Damage 1, Damage 2 and Damage 3), four actuation phases and M temperatures.
After the organization of the data for each actuation phase, the methodology considers two general steps or phases: (a) training; and (b) testing. During the training step, data from the healthy or pristine structure subjected to different temperatures are used to train the machines. Figure 6 includes a representation of the steps that are needed between the data acquisition and the machine training. These steps include a data normalization as in Section 2.1.2 [42,43] and principal component analysis (PCA). In this case, we consider the projection onto the first two principal components (scores) as the input to train the machine. The trained machine is then considered as the pattern.
The testing step considers the use of new data coming from the structure to be diagnosed in an unknown state. These collected data are pre-processed in an identical manner as the data collected from the pristine structure. This means that these data are normalized, and then, the normalized data are projected onto the first two principal components of the PCA model. Finally, the pattern defined by the trained machine will be able to predict the current state of the structure, as depicted in Figure 7.
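Putting these steps together, the training and testing flow of Figures 6 and 7 can be outlined as below. This sketch builds on the functions introduced in Section 2 (unfold_trials_by_sensors, group_scale, pca_model, project_new_data), uses synthetic data shapes and a fine k-NN as an example classifier, and is not the authors' implementation:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# --- Training step (illustrative shapes: 200 labelled trials, 4 sensors, 500 samples) ---
Z_train = np.random.randn(200, 4, 500)             # trials x sensors x time samples
labels  = np.random.randint(0, 4, size=200)        # 0 = healthy, 1-3 = damage classes

X = unfold_trials_by_sensors(Z_train)                              # Section 2.1.1
X_scaled, col_means, sensor_stds = group_scale(X, n_sensors=4)     # Section 2.1.2
P_hat, eigvals, T_hat = pca_model(X_scaled, n_components=2)        # first two scores

machine = KNeighborsClassifier(n_neighbors=1).fit(T_hat, labels)   # e.g., a fine k-NN

# --- Testing step: new trials from the structure in an unknown state ---
Z_test = np.random.randn(50, 4, 500)
scores = project_new_data(unfold_trials_by_sensors(Z_test),
                          col_means, sensor_stds, P_hat, n_sensors=4)  # Section 2.1.3
predicted_state = machine.predict(scores)          # 0 = healthy, 1-3 = damage class
```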

4. Experimental Setup and Results

In this paper, two specimens (structures) are used to explore and demonstrate the feasibility of the structural health monitoring system for damage identification in structures subjected to temperature changes introduced in Section 3. These two specimens are:
(i)
an aluminum plate with four piezoelectric transducers; and
(ii)
a composite plate of carbon fiber polymer with six piezoelectric transducers.
These two specimens differ in the kind of material, size and number of sensors used. In both cases, the same data acquisition sub-system is used as is represented in Figure 3.

4.1. First Specimen: Aluminum Plate

The first specimen that we consider in this paper is an aluminum plate with an area of 40 cm × 40 cm that is instrumented with four piezoelectric sensors. The distribution of the piezoelectric transducers and the size and geometry of the specimen are shown in Figure 8. This figure also indicates the location of the three damages that are present in the structure.
To test the structure under different environmental conditions and, more precisely, under different temperatures, an incubator or climatic chamber (Faithful, Model HWS-250BX) is used to apply these variations. A picture of the aluminum plate inside the chamber can be found in Figure 9.
The experimental setup includes testing with five different temperatures:
  • T1 = 10 °C;
  • T2 = 20 °C;
  • T3 = 30 °C;
  • T4 = 40 °C; and
  • T5 = 45 °C.
For each one of these five temperatures, data from each structural state are captured. In this case, we have considered four different structural states:
  • no damage (healthy or pristine structure);
  • Damage 1;
  • Damage 2; and
  • Damage 3.
The location of the three damages that are present in the structure can be found in Figure 8. Figure 10 shows the experimental setup for the four different structural states. As can be observed, the damage is simulated in the structure, in a non-destructive way, as an added mass. The added mass is a magnet, which is attached on both sides of the structure to ensure its position; because aluminum is non-magnetic, the main effect of this kind of simulated damage is to change the local properties of the structure and produce changes in the propagated wave.
It is well known that temperature changes affect the overall behavior of Lamb waves. More precisely, these changes affect how the Lamb waves propagate, the velocity of the wave over the surface [44] and even the adhesive used to fix the sensors [45]. A very detailed study on the temperature effects in ultrasonic Lamb waves can be found in the work by Lanza di Scalea and Salamone [46]. One of the main conclusions of this work is that the temperature has an imperceptible effect on the wavelength tuning points and a pronounced effect on the response amplitude. In this sense, the goal of the proposed methodology is to include these variations in the structural health monitoring system to avoid false alarms and missed faults in the identification process.
The effect of the temperature changes is clearly illustrated in Figure 11, where the time-history signal received by Sensor 2 when the first sensor is used as an actuator is depicted for three different temperatures. From this figure, it is possible to observe that changes in the temperature imply changes in the waveforms. More precisely, variations in the phase and amplitude can be easily detected, but some other, more complex changes can also be present [47]. Figure 12 and Figure 13 show the signals received by Sensors 3 and 4, respectively, when the first piezoelectric transducer is used as an actuator. Inspecting both figures, as in Figure 11, there is a clear effect of the temperature on the phase and amplitude of the measured signals. It is worth keeping in mind that the distance between Sensors 1 and 2 is equal to that between Sensors 1 and 4, while the distance between Sensors 1 and 3 is relatively larger.
The feature vector that is used to train and to test the machines is formed by the projections or scores of the original data onto the PCA model created as described in Section 2.1.1. In general, the number of scores that have to be considered depends on the cumulative contribution of variance that is accounted for. More precisely, the $i$-th score is related to the eigenvector $\mathbf{p}_i$, defined in Equation (10), and the eigenvalue $\lambda_i$, in Equation (8); the cumulative contribution rate of variance accounted for by the first $\sigma \in \mathbb{N}$ scores is defined as
$$\frac{\sum_{i=1}^{\sigma}\lambda_i}{\sum_{i=1}^{N\cdot L}\lambda_i},$$
where $N\cdot L$ is the total number of principal components. In this sense, the cumulative contribution of the first three scores is depicted in Figure 14. In this experimental setup, we will use the first two principal components, which account for more than 80% of the variance. A priori, better results should be obtained if we use as many principal components as possible. However, in some cases, as reported in [48,49], fewer principal components may lead to more accurate results.
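For reference, the number of scores needed to reach a given cumulative contribution of variance can be computed directly from the eigenvalues; the following helper is an illustration with made-up eigenvalues, and the 80% threshold matches the criterion used in this setup:

```python
import numpy as np

def n_components_for(eigvals, threshold=0.80):
    """Smallest number of principal components whose cumulative contribution
    rate of variance reaches the given threshold."""
    eigvals = np.sort(eigvals)[::-1]           # decreasing eigenvalues
    cumulative = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(cumulative, threshold) + 1)

# Example: with these made-up eigenvalues, two components exceed 80% of the variance.
print(n_components_for(np.array([6.0, 2.5, 0.9, 0.4, 0.2])))   # -> 2
```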
In a standard application of the principal component analysis strategy in the field of structural health monitoring, the scores allow a visual grouping or separation [50]. In some other cases, as in [51], two classical indices can be used for damage detection, namely the Q index (also known as the square prediction error (SPE)) and Hotelling's T2 index. In this case, however, Figure 15, where the projections onto the first two principal components of samples coming from the pristine structure and from the damaged structure subjected to temperature changes are plotted, shows that a visual grouping, clustering or separation cannot be performed. To solve this problem, several strategies have been applied in the literature. Some of these procedures are related to univariate or multivariate statistical hypothesis testing [29,48,49]. In this work, a large set of machine learning approaches is used, so that some guidance can be offered on the most suitable schemes.
Table 1 shows the results of the damage identification obtained with the 20 different machine learning strategies. To this end, the Classification Learner app of MATLAB was used. The columns in Table 1 correspond to the percentage of correct decisions for the healthy structure and the structure with Damages 1, 2 and 3. The detailed results can be found in Figure 16 and Figure 17, where the machines with the best and worst performance are considered, respectively. More precisely, in the subspace k-NN classifier, 162 cases have been correctly classified out of 200 cases, while in the fine k-NN classifier, this number rises to 163 cases. Similarly, with respect to the weighted k-NN and the fine Gaussian SVM classifiers, 154 and 157 cases have been correctly classified. This represents 77–82% of correct decisions. It is worth noting that, in these four cases, the structure with no damage is correctly classified in more than 90% of the cases. Similarly, the structure with damage is confused with the structure with no damage in just a few cases. For instance, in the fine Gaussian SVM classifier, eight cases of the structure with damage are identified as healthy, which represents 5.3% out of 150 cases. As stated before, Figure 17 shows the confusion matrices for the machines with the poorest performance. These are: RUSBoosted trees, boosted trees, coarse k-NN and coarse Gaussian SVM. For instance, in both RUSBoosted trees and boosted trees, not one of the samples coming from the structure with Damage 1 is correctly classified. However, in these two cases, 49 and 48 cases of the structure with no damage have been correctly classified, out of 50 cases, which represents 98% and 96%, respectively.
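The per-class percentages and confusion matrices reported here can be reproduced from predicted labels in the usual way; the following sketch uses scikit-learn and synthetic labels purely for illustration (for example, 163 correct decisions out of 200 cases corresponds to an overall accuracy of 81.5%):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Illustrative ground-truth and predicted structural states (0 = healthy, 1-3 = damage).
y_true = np.random.randint(0, 4, size=200)
y_pred = y_true.copy()
y_pred[np.random.choice(200, size=37, replace=False)] = np.random.randint(0, 4, size=37)

cm = confusion_matrix(y_true, y_pred)          # rows: true class, columns: predicted class
per_class = cm.diagonal() / cm.sum(axis=1)     # fraction of correct decisions per class
print(cm, per_class, accuracy_score(y_true, y_pred))
```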

4.2. Second Specimen: Carbon Fiber Plate

The second specimen used for the experimental validation of the approach presented in this paper is a composite plate of carbon fiber polymer with an area of 50 cm × 25 cm and a 2-mm thickness. The plate is instrumented with piezoelectric transducers. Figure 18 shows the dimensions and distribution of the six piezoelectric transducers attached to the structure, as well as the location of the three damages that are present in the structure.
As in the previous experimental setup, to test the structure under different environmental conditions and, more precisely, under different temperatures, an incubator or climatic chamber (Faithful, Model HWS-250BX) is used to apply these variations. A picture of the composite plate inside the chamber can be found in Figure 19.
The experimental setup includes testing with six different temperatures:
  • T1 = 0 °C;
  • T2 = 10 °C;
  • T3 = 20 °C;
  • T4 = 30 °C;
  • T5 = 40 °C; and
  • T6 = 45 °C.
For each one of these six temperatures, data from each structural state are captured. In this case, we have considered four different structural states:
  • no damage (healthy or pristine structure);
  • Damage 1;
  • Damage 2; and
  • Damage 3.
The effect of the temperature changes in the composite plate is clearly illustrated in Figure 20, where the time-history signal received by Sensor 2 when the first sensor is used as an actuator is depicted for the six different temperatures. As in the previous experimental setup, from this figure it is possible to observe that changes in the temperature imply changes in the waveforms. More precisely, variations in the phase and amplitude can be easily detected.
Finally, the first principal component versus the second principal component is plotted in Figure 21. It can be observed, again, that a visual grouping, clustering or separation cannot be performed. In this experimental setup, we will use the first three principal components that account for more than 80% of the variance, as can be seen in Figure 22.
Table 2 shows the results of the damage identification in the composite plate obtained with the 20 different machine learning strategies. The columns in Table 2 correspond to the percentage of correct decisions for the healthy structure and the structure with Damages 1, 2 and 3. The detailed results can be found in Figure 23 and Figure 24, where the machines with the best and worst performance are considered, respectively. More precisely, in the subspace k-NN classifier, 378 cases have been correctly classified out of 480 cases, while in the bagged trees classifier, this number rises to 382 cases. Similarly, with respect to the weighted k-NN and the cubic SVM classifiers, 336 and 368 cases have been correctly classified. This represents 70–80% of correct decisions. It is worth noting that, in these four cases, the structure where Damage 2 is present is correctly classified in more than 83% of the cases. Similarly, the structure with damage is confused with the structure with no damage in just a few cases. For instance, in the bagged trees classifier, 22 cases of the structure with damage are identified as healthy, which represents 6.1% out of 360 cases. As stated before, Figure 24 shows the confusion matrices for the machines with the poorest performance. These are: RUSBoosted trees, boosted trees, coarse k-NN and coarse Gaussian SVM. For instance, in RUSBoosted trees, not one of the samples coming from the healthy structure is correctly classified. However, in this case, 75 cases of the structure with Damage 2 have been correctly classified, out of 120 cases, which represents 75%.

5. Concluding Remarks

In this contribution, a structural health monitoring methodology has been developed for damage detection and classification in structures that are subjected to changes in the environmental conditions. The experimental results presented in this work demonstrate that changes in the temperature affect basic damage detection strategies based on principal component analysis. This is because pattern recognition approaches in SHM applications use data from the structure under established conditions to define a pattern, and small changes in the data caused by temperature variations produce differences with respect to that pattern and, consequently, false positive damage detections even when the structure is healthy. In this sense, to overcome the distortion caused by these changing environmental conditions, a more complex SHM strategy has been presented, based on: (i) ultrasonic signals acquired through a piezoelectric sensor network; (ii) principal component analysis; and (iii) pattern recognition based on machine learning approaches, which considers data from different structural states under different temperatures.
According to the experimental results on both an aluminum plate and a composite plate of carbon fiber polymer, subspace k-NN and weighted k-NN have presented the most accurate results. Besides, for the aluminum plate, the fine k-NN and fine Gaussian SVM classifiers showed a very good behavior. For the composite plate, bagged trees and cubic SVM were also quite accurate.
Among the classifiers, the ones with the poorest accuracy were RUSBoosted trees, boosted trees, coarse k-NN and coarse Gaussian SVM. The advantages of the developed methodology include: (i) a data-driven analysis that provides knowledge of the current state of the structure directly from the collected data and without the use of a complex mathematical model; and (ii) the reduction of false positives, since data from different temperatures are considered during the training, and sensor data fusion provides a single, more reliable result. One of the disadvantages of the methodology is the large quantity of data required to cover all of the structural states over the whole range of temperatures. Besides, a new damage can be detected as such, but it cannot be properly classified, since there is no information about this particular damage within the pattern.
Since the methodology allows detecting and classifying damage with data collected from the structure, damage localization can also be explored: a large quantity of data from damage at different positions of the structure can be used not only for classification but also for localization, provided that the position of the damage is defined from the beginning of the training process. A variation of this methodology is being explored in other work, where machine learning approaches are used for regression.

Acknowledgments

This work has been partially funded by the Spanish Ministry of Economy and Competitiveness through the research project DPI2014-58427-C2-1-R, and by the Catalonia Government (Generalitat de Catalunya) through the research project 2014 SGR 859. This work is also supported by Universidad Santo Tomás through Grant FODEIN 2017, Project Code FODEIN 17545020-47.

Author Contributions

J.V., D.T. and F.P. conceived and designed the experiments; J.V. and M.A. performed the experiments; J.V., D.T. and M.A. analyzed the data; J.V. and F.P. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sohn, H. Effects of environmental and operational variability on structural health monitoring. Philos. Trans. R. Soc. Lond. A Math. Phys. Eng. Sci. 2007, 365, 539–560. [Google Scholar] [CrossRef] [PubMed]
  2. Anaya, M.; Tibaduiza, D.; Torres, M.; Pozo, F.; Ruiz, M.; Mujica, L.; Rodellar, J.; Fritzen, C. Data-driven methodology to detect and classify structural changes under temperature variations. Smart Mater. Struct. 2014, 23, 1–15. [Google Scholar] [CrossRef]
  3. Chakraborty, D.; Kovvali, N.; Zhang, J.J.; Papandreou-Suppappola, A.; Chattopadhyay, A. Adaptive learning for damage classification in structural health monitoring. In Proceedings of the 2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–4 November 2009; pp. 1678–1682. [Google Scholar]
  4. Yan, A.M.; Kerschen, G.; De Boe, P.; Golinval, J.C. Structural damage diagnosis under varying environmental conditions—Part I: A linear analysis. Mech. Syst. Signal Process. 2005, 19, 847–864. [Google Scholar] [CrossRef]
  5. Baptista, F.G.; Budoya, D.E.; de Almeida, V.A.D.; Ulson, J.A.C. An experimental study on the effect of temperature on piezoelectric sensors for impedance-based structural health monitoring. Sensors 2014, 14, 1208–1227. [Google Scholar] [CrossRef] [PubMed]
  6. Anaya, M.; Tibaduiza, D.A.; Pozo, F. Detection and classification of structural changes using artificial immune systems and fuzzy clustering. Int. J. Bio-Inspir. Comput. 2017, 9, 35–52. [Google Scholar] [CrossRef]
  7. Torres-Arredondo, M.A.; Buethe, I.; Tibaduiza, D.A.; Rodellar, J.; Fritzen, C.P. Damage detection and classification in pipework using acousto-ultrasonics and non-linear data-driven modelling. J. Civ. Struct. Health Monit. 2013, 3, 297–306. [Google Scholar] [CrossRef]
  8. Torres-Arredondo, M.A.; Sierra-Pérez, J.; Cabanes, G. An optimal baseline selection methodology for data-driven damage detection and temperature compensation in acousto-ultrasonics. Smart Mater. Struct. 2016, 25, 055034. [Google Scholar] [CrossRef]
  9. Leichtle, T.; Geiß, C.; Wurm, M.; Lakes, T.; Taubenböck, H. Unsupervised change detection in VHR remote sensing imagery—An object-based clustering approach in a dynamic urban environment. Int. J. Appl. Earth Obs. Geoinf. 2017, 54, 15–27. [Google Scholar] [CrossRef]
  10. Anaya, M.; Tibaduiza, D.; Pozo, F. Artificial Immune System (AIS) for Damage Detection Under Variable Temperature Conditions. In Proceedings of the European Workshop on Structural Health Monitoring, Bilbao, Spain, 5–8 July 2016. [Google Scholar]
  11. Vanlanduit, S.; Parloo, E.; Cauberghe, B.; Guillaume, P.; Verboven, P. A robust singular value decomposition for damage detection under changing operating conditions and structural uncertainties. J. Sound Vib. 2005, 284, 1033–1050. [Google Scholar] [CrossRef]
  12. Deraemaeker, A.; Reynders, E.; Roeck, G.D.; Kullaa, J. Vibration-based structural health monitoring using output-only measurements under changing environment. Mech. Syst. Signal Process. 2008, 22, 34–56. [Google Scholar] [CrossRef]
  13. Balmès, É.; Basseville, M.; Bourquin, F.; Mevel, L.; Nasser, H.; Treyssede, F. Merging sensor data from multiple temperature scenarios for vibration monitoring of civil structures. Struct. Health Monit. 2008, 7, 129–142. [Google Scholar] [CrossRef]
  14. Buren, K.V.; Reilly, J.; Neal, K.; Edwards, H.; Hemez, F. Guaranteeing robustness of structural condition monitoring to environmental variability. J. Sound Vib. 2017, 386, 134–148. [Google Scholar] [CrossRef]
  15. Sohn, H.; Worden, K.; Farrar, C.R. Statistical Damage Classification Under Changing Environmental and Operational Conditions. J. Intell. Mater. Syst. Struct. 2002, 13, 561–574. [Google Scholar] [CrossRef]
  16. Figueiredo, E.; Park, G.; Farrar, C.R.; Worden, K.; Figueiras, J. Machine learning algorithms for damage detection under operational and environmental variability. Struct. Health Monit. 2011, 10, 559–572. [Google Scholar] [CrossRef]
  17. Worden, K.; Manson, G. The application of machine learning to structural health monitoring. Philos. Trans. R. Soc. Lond. A Math. Phys. Eng. Sci. 2007, 365, 515–537. [Google Scholar] [CrossRef] [PubMed]
  18. Worden, K.; Manson, G.; Allman, D. Experimental validation of a structural health monitoring methodology: Part I. Novelty detection on a laboratory structure. J. Sound Vib. 2003, 259, 323–343. [Google Scholar] [CrossRef]
  19. Manson, G.; Worden, K.; Allman, D. Experimental validation of a structural health monitoring methodology: Part II. Novelty detection on a Gnat aircraft. J. Sound Vib. 2003, 259, 345–363. [Google Scholar] [CrossRef]
  20. Roy, S.; Chang, F.K.; Lee, S.J.; Pollock, P.; Janapati, V. A novel machine-learning approach for structural state identification using ultrasonic guided waves. In Safety, Reliability, Risk and Life-Cycle Performance of Structures and Infrastructures; CRC Press: Boca Raton, FL, USA, 2013; pp. 321–328. [Google Scholar]
  21. Tibaduiza, D.; Mujica, L.; Anaya, M.; Rodellar, J.; Güemes, A. Independent component analysis for detecting damages on aircraft wing skeleton. In Proceedings of the 5th European Conference on Structural Control, Genoa, Italy, 18–20 June 2012. [Google Scholar]
  22. Tibaduiza, D.; Anaya, M.; Forero, E.; Castro, R.; Pozo, F. A Sensor Fault Detection Methodology applied to Piezoelectric Active Systems in Structural Health Monitoring Applications. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2016; Volume 138, p. 012016. [Google Scholar]
  23. Vitola, J.; Pozo, F.; Tibaduiza, D.A.; Anaya, M. A Sensor Data Fusion System Based on k-Nearest Neighbor Pattern Classification for Structural Health Monitoring Applications. Sensors 2017, 17, 417. [Google Scholar] [CrossRef] [PubMed]
  24. Jolliffe, I. Principal Component Analysis; Wiley Online Library: New York, NY, USA, 2002. [Google Scholar]
  25. Anaya, M.; Tibaduiza, D.A.; Pozo, F. A bioinspired methodology based on an artificial immune system for damage detection in structural health monitoring. Shock Vib. 2015, 2015, 648097. [Google Scholar] [CrossRef]
  26. Tibaduiza, D.A. Design and Validation of a Structural Health Monitoring System for Aeronautical Structures. Ph.D. Thesis, Universitat Politècnica de Catalunya, Barcelona, Spain, 2012. [Google Scholar]
  27. Jeong, D.H.; Ziemkiewicz, C.; Fisher, B.; Ribarsky, W.; Chang, R. iPCA: An Interactive System for PCA-Based Visual Analytics; Computer Graphics Forum; Wiley Online Library: Oxford, UK, 2009; Volume 28, pp. 767–774. [Google Scholar]
  28. Westerhuis, J.A.; Kourti, T.; MacGregor, J.F. Comparing alternative approaches for multivariate statistical analysis of batch process data. J. Chemom. 1999, 13, 397–413.
  29. Pozo, F.; Vidal, Y. Wind turbine fault detection through principal component analysis and statistical hypothesis testing. Energies 2015, 9, 3.
  30. Farrar, C.R.; Worden, K. Structural Health Monitoring: A Machine Learning Perspective; John Wiley & Sons: Hoboken, NJ, USA, 2012.
  31. Ciang, C.C.; Lee, J.R.; Bang, H.J. Structural health monitoring for a wind turbine system: A review of damage detection methods. Meas. Sci. Technol. 2008, 19, 122001.
  32. Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27.
  33. Yang, J.; Sun, Z.; Chen, Y. Fault Detection Using the Clustering-kNN Rule for Gas Sensor Arrays. Sensors 2016, 16, 2069.
  34. Dhanabal, S.; Chandramathi, S. A review of various k-nearest neighbor query processing techniques. Int. J. Comput. Appl. 2011, 31, 14–22.
  35. Johnson, J.; Yadav, A. Weak and Electromagnetic Interactions. In Proceedings of the International Conference on ICT for Sustainable Development (ICT4SD), Panaji, Goa, India, 1–2 July 2016.
  36. MathWorks. Statistics and Machine Learning Toolbox for Matlab; MathWorks: Natick, MA, USA, 2015.
  37. Deng, Z.; Zhu, X.; Cheng, D.; Zong, M.; Zhang, S. Efficient kNN classification algorithm for big data. Neurocomputing 2016, 195, 143–148.
  38. Oh, S.; Byon, Y.J.; Yeo, H. Improvement of search strategy with k-nearest neighbors approach for traffic state prediction. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1146–1156.
  39. Sutton, C. Classification and Regression Trees, Bagging, and Boosting. In Handbook of Statistics; Elsevier: Amsterdam, The Netherlands, 2005; pp. 11–26.
  40. Quinlan, J. Induction of Decision Trees. Mach. Learn. 1986, 1, 81–106.
  41. Shmilovici, A. Support Vector Machines. In Data Mining and Knowledge Discovery Handbook; Maimon, O., Rokach, L., Eds.; Springer: Boston, MA, USA, 2005; pp. 257–276.
  42. Tibaduiza, D.A.; Mujica, L.E.; Rodellar, J. Comparison of several methods for damage localization using indices and contributions based on PCA. J. Phys. Conf. Ser. 2011, 305, 012013.
  43. Torres-Arredondo, M.A.; Tibaduiza, D.A.; McGugan, M.; Toftegaard, H.; Borum, K.K.; Mujica, L.E.; Rodellar, J.; Fritzen, C.P. Multivariate data-driven modelling and pattern recognition for damage detection and identification for acoustic emission and acousto-ultrasonics. Smart Mater. Struct. 2013, 22, 105023.
  44. Yu, L.; Leckey, C.A. Lamb wave-based quantitative crack detection using a focusing array algorithm. J. Intell. Mater. Syst. Struct. 2013, 24, 1138–1152.
  45. Neto, R.M.F.; Steffen, V.; Rade, D.A.; Gallo, C.A. System for Structural Health Monitoring based on piezoelectric sensors/actuators. In Proceedings of the Power Electronics Conference (COBEP), Natal, Brazil, 11–15 September 2011; pp. 365–371.
  46. Lanza di Scalea, F.; Salamone, S. Temperature effects in ultrasonic Lamb wave structural health monitoring systems. J. Acoust. Soc. Am. 2008, 124, 161–174.
  47. Ha, S.; Lonkar, K.; Mittal, A.; Chang, F.K. Adhesive Layer Effects on PZT-induced Lamb Waves at Elevated Temperatures. Struct. Health Monit. 2010, 9, 247–256.
  48. Mujica, L.; Ruiz, M.; Pozo, F.; Rodellar, J.; Güemes, A. A structural damage detection indicator based on principal component analysis and statistical hypothesis testing. Smart Mater. Struct. 2013, 23, 025014.
  49. Pozo, F.; Arruga, I.; Mujica, L.E.; Ruiz, M.; Podivilova, E. Detection of structural changes through principal component analysis and multivariate statistical inference. Struct. Health Monit. 2016, 15, 127–142.
  50. Mujica, L.; Rodellar, J.; Fernandez, A.; Guemes, A. Q-statistic and T2-statistic PCA-based measures for damage assessment in structures. Struct. Health Monit. 2010, 10, 539–553.
  51. Odgaard, P.F.; Lin, B.; Jorgensen, S.B. Observer and data-driven-model-based fault detection in power plant coal mills. IEEE Trans. Energy Convers. 2008, 23, 659–668.
Figure 1. The three-way matrix Z can be unfolded to a two-way array in several ways.
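The unfolding referred to in Figure 1 can be illustrated with a short, self-contained sketch (not the authors' code). Assuming a three-way array with hypothetical dimensions of I experiments, J time samples and K piezoelectric sensors, one possible sensor-wise unfolding in Python/NumPy is:

```python
import numpy as np

# Hypothetical dimensions: I experiments, J time samples, K sensors (illustrative only).
I, J, K = 50, 1000, 4
Z = np.random.randn(I, J, K)  # stand-in for the three-way data array Z

# One common unfolding: keep the experiments as rows and concatenate the
# time histories of all sensors along the columns -> shape (I, J*K).
X_unfolded = Z.transpose(0, 2, 1).reshape(I, J * K)
print(X_unfolded.shape)  # (50, 4000)
```

Other unfoldings simply change which dimension is kept as rows and which dimensions are concatenated along the columns.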
Figure 2. Classification of the machine learning approaches according to the type of learning.
Figure 3. Representation of the structural health monitoring (SHM) system.
Figure 4. Excitation signal.
Figure 5. Data organization for each temperature.
Figure 6. Methodology for training the machines.
Figure 7. Methodology for prediction.
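Figures 6 and 7 separate the workflow into a training stage and a prediction stage. As a minimal sketch of that split, using Python/scikit-learn as a stand-in for the MATLAB Statistics and Machine Learning Toolbox cited in [36], and with placeholder data X and labels y that are assumptions rather than the experimental records, the two stages could look like:

```python
# Minimal sketch of the train/predict split depicted in Figures 6 and 7.
# X: unfolded sensor data (rows = experiments at several temperatures),
# y: labels (0 = healthy, 1..3 = damage types). Both are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4000))   # placeholder unfolded data
y = rng.integers(0, 4, size=200)   # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Training stage (Figure 6): scaling + PCA feature reduction + classifier.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)

# Prediction stage (Figure 7): new measurements pass through the same pipeline.
print(model.score(X_test, y_test))
```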
Figure 8. Aluminum plate instrumented with four piezoelectric sensors.
Figure 9. Aluminum plate inside the climate chamber (Faithful HWS-250BX).
Figure 10. The plate in the climate chamber.
Figure 11. Signal received by Sensor 2 when the first sensor is used as an actuator.
Figure 12. Signal received by Sensor 3 when the first sensor is used as an actuator.
Figure 13. Signal received by Sensor 4 when the first sensor is used as an actuator.
Figure 14. Cumulative contribution rate of variance for the principal components.
Figure 15. First principal component versus second principal component for the aluminum plate described in Section 4.1.
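The quantities plotted in Figures 14 and 15, the cumulative contribution of variance and the first two principal-component scores, can be reproduced generically with any PCA implementation. The following is an illustrative sketch with scikit-learn; the data matrix X is a placeholder, not the measured records:

```python
# Illustrative computation of the quantities shown in Figures 14 and 15.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X = np.random.randn(200, 4000)            # placeholder unfolded data matrix
Xs = StandardScaler().fit_transform(X)    # column-wise scaling before PCA

pca = PCA()
scores = pca.fit_transform(Xs)

cumulative = np.cumsum(pca.explained_variance_ratio_)  # curve of Figure 14
pc1, pc2 = scores[:, 0], scores[:, 1]                   # scatter of Figure 15
print(cumulative[:5])
```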
Figure 16. Confusion matrices using: (a) subspace k-NN; (b) weighted k-NN; (c) fine k-NN; and (d) fine Gaussian SVM.
Figure 17. Confusion matrices using: (a) RUSBoosted trees; (b) boosted trees; (c) coarse k-NN; and (d) coarse Gaussian SVM.
Figure 18. Experimental setup for the composite plate.
Figure 19. Composite plate in the climate chamber.
Figure 20. Signal received by Sensor 2 when the first sensor is used as an actuator.
Figure 21. First principal component versus second principal component for the carbon fiber plate described in Section 4.2.
Figure 22. Cumulative variance for the PCA scores.
Figure 23. Confusion matrices for the machines with good performance: (a) subspace k-NN; (b) weighted k-NN; (c) bagged trees; and (d) cubic SVM.
Figure 24. Confusion matrices for the machines with poor performance: (a) RUSBoosted trees; (b) boosted trees; (c) coarse k-NN; and (d) coarse Gaussian SVM.
Table 1. Percentage of correct decisions for the healthy structure and the structure with Damages 1, 2 and 3, for the twenty different machine learning strategies (aluminum plate).

Machine Name            Healthy    Damage 1    Damage 2    Damage 3
Medium Tree             66%        76%         70%         56%
Simple Tree             64%        60%         30%         58%
Complex Tree            72%        76%         58%         56%
Linear SVM              70%        60%         26%         60%
Quadratic SVM           78%        70%         56%         70%
Cubic SVM               86%        68%         66%         72%
Fine Gaussian SVM       90%        80%         66%         78%
Medium Gaussian SVM     76%        80%         56%         74%
Coarse Gaussian SVM     94%        64%         14%         38%
Fine k-NN               94%        78%         74%         80%
Medium k-NN             80%        62%         64%         74%
Coarse k-NN             94%        42%         2%          24%
Cosine k-NN             84%        58%         78%         72%
Cubic k-NN              80%        64%         62%         76%
Weighted k-NN           94%        66%         68%         80%
Boosted Trees           96%        0%          42%         42%
Bagged Trees            84%        70%         66%         78%
Subspace Discriminant   56%        44%         32%         46%
Subspace k-NN           94%        78%         72%         80%
RUSBoosted Trees        98%        0%          42%         0%
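The per-class percentages reported in Table 1 correspond to the diagonal of a row-normalised confusion matrix (correct decisions per class). A sketch of that computation is given below; the classifier names loosely mirror a few of the MATLAB Classification Learner presets listed in the table, but the parameter choices (e.g., k = 1 for the fine k-NN, a cubic polynomial kernel for the cubic SVM) and the data are assumptions for illustration only:

```python
# Per-class correct-decision rates from a confusion matrix (cf. Tables 1 and 2).
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))    # placeholder features, e.g., PCA scores
y = rng.integers(0, 4, size=200)  # 0 = healthy, 1..3 = damage types

classifiers = {
    "Fine k-NN (k=1)": KNeighborsClassifier(n_neighbors=1),
    "Cubic SVM": SVC(kernel="poly", degree=3),
    "Bagged Trees": BaggingClassifier(),
}

for name, clf in classifiers.items():
    y_pred = cross_val_predict(clf, X, y, cv=5)
    cm = confusion_matrix(y, y_pred)
    per_class = cm.diagonal() / cm.sum(axis=1)  # correct-decision rate per class
    print(name, np.round(100 * per_class, 1))
```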
Table 2. Percentage of correct decisions for the healthy structure and the structure with Damages 1, 2 and 3, for the twenty different machine learning strategies (composite plate).

Machine Name            Healthy    Damage 1    Damage 2    Damage 3
Medium Tree             55.00%     63.33%      60.83%      52.50%
Simple Tree             40.00%     60.00%      63.33%      42.50%
Complex Tree            57.50%     64.17%      75.83%      65.83%
Linear SVM              41.67%     59.17%      45.00%      47.50%
Quadratic SVM           65.83%     73.33%      85.00%      75.50%
Cubic SVM               70.83%     75.00%      86.67%      74.17%
Fine Gaussian SVM       59.17%     64.17%      83.33%      78.33%
Medium Gaussian SVM     55.83%     60.00%      82.50%      63.33%
Coarse Gaussian SVM     52.50%     10.83%      33.33%      56.67%
Fine k-NN               63.33%     61.67%      80.00%      70.00%
Medium k-NN             65.00%     46.67%      75.00%      63.33%
Coarse k-NN             52.50%     37.50%      60.83%      35.83%
Cosine k-NN             65.00%     43.33%      79.17%      60.83%
Cubic k-NN              59.17%     47.50%      72.50%      60.00%
Weighted k-NN           61.67%     58.33%      83.33%      74.17%
Boosted Trees           16.67%     62.50%      60.83%      71.67%
Bagged Trees            71.67%     72.50%      90.00%      84.17%
Subspace Discriminant   33.33%     45.83%      45.00%      55.83%
Subspace k-NN           70.83%     72.50%      89.17%      82.50%
RUSBoosted Trees        0.00%      62.50%      0.00%       93.33%
