Abstract

The neural network has the advantages of self-learning, self-adaptation, and fault tolerance. It can establish a qualitative and quantitative evaluation model that is closer to human thought patterns. However, the structure and the convergence rate of the radial basis function (RBF) neural network need to be improved. This paper proposes a new variable structure RBF (VS-RBF) neural network with a fast learning rate to solve the problems of structural optimization design and parameter learning for the RBF neural network. The number of neurons in the hidden layer is adjusted by calculating the output information of neurons in the hidden layer and the multi-information between neurons in the hidden layer and the output layer. This method effectively prevents the RBF neural network structure from becoming too large or too small. The convergence rate of the RBF neural network is improved by using a robust regression algorithm and a fast learning rate algorithm. At the same time, a convergence analysis of the VS-RBF neural network is given to ensure the stability of the RBF neural network. Compared with other self-organizing RBF neural networks (self-organizing RBF (SORBF) and rough set RBF (RS-RBF) neural networks), VS-RBF has a more compact structure, faster dynamic response speed, and better generalization ability. Simulations of approximating a typical nonlinear function, identifying UCI datasets, and evaluating the sortie generation capacity of a carrier aircraft show the effectiveness of VS-RBF.

1. Introduction

A carrier aircraft is an important part of modern naval warfare. Research on the warfare capacity of the carrier aircraft has become a topic of growing interest as more attention is paid to territorial-sea security. Comparing the sortie generation capacity of the carrier aircraft under different operational schemes helps determine the final plan. Therefore, the evaluation of the sortie generation capacity of the carrier aircraft has important theoretical significance and application value [1].

The evaluation of the sortie generation capacity of the carrier aircraft is complex because of the mutual influence among factors and their complex nonlinear relationships. This evaluation problem has been studied recently. Xia et al. [2, 3] applied the principal component reduction method and the nonlinear fuzzy matter-element method to evaluate the sortie generation capacity of the carrier aircraft; neither method considered the mutual influence of factors. Gilchrist [4] proposed an evaluation method of the suitability of LCOM for modeling; this report studied the base-level munition production process in LCOM. However, the common evaluation methods ignore the correlation between the influencing factors, so there is a certain deviation between the evaluation results and the actual situation.

The neural network has the advantages of self-learning, self-adaptation, and fault tolerance. It can establish a qualitative and quantitative evaluation model that is closer to human thought patterns. A trained neural network can embed expert evaluation ideas in the network. Thus, it can not only simulate expert evaluation but also avoid human errors in the evaluation process and the subjective influence of manually assigned weights. An evaluation method based on the RBF neural network has the advantages of fast calculation, high problem-solving efficiency, and strong self-learning ability. However, there are two problems in the application of the radial basis function (RBF) neural network.

The first problem is the structural design of the RBF neural network. In recent years, many optimization methods for the RBF neural network have been put forward.
(1) The pruning algorithm [5] was regarded as an effective way to optimize the network structure and improve the generalization ability of the network, but setting the parameters of the pruning method required experience and skill.
(2) The growing algorithm [6] increased the number of neurons and connections until the generalization ability met the requirements, but it was difficult to determine when to stop growing.
(3) The pruning and growing algorithm: Kokkinos and Margaritis [7] presented a Hierarchical Markovian Radial Basis Function Neural Network (HiMarkovRBFNN) model that enabled recursive operations. The hierarchical structure of this network was composed of recursively nested RBF neural networks with arbitrary levels of hierarchy. All hidden neurons in the hierarchy levels were true RBF neural networks with two weight matrices, and the hidden RBF response units were recursive. However, this method was affected by the initial values, and the final RBF neural network was sometimes unstable; moreover, the algorithm ignored the adjustment of structural parameters, which led to a slow convergence speed of the learning algorithm.
Therefore, the structural optimization design of the RBF neural network is still an open problem, and in particular the convergence of the dynamic structure adjustment process has not been solved well.

The second problem is the training method for the weights of the RBF neural network and the learning rate. At present, RBF neural network weights are usually trained by the linear least squares algorithm, but the least squares estimation is affected by outliers [8, 9]. Kadalbajoo et al. [10] presented an RBF-based implicit-explicit numerical method to solve the partial integro-differential equation that describes the option price under the jump diffusion model, but the sum of squared errors grew rapidly with the squared error of each training sample. At the same time, there is the problem of setting the learning rate of the RBF neural network [11–14]. In practice, the learning rate is often set subjectively as a fixed value [15–20] and remains unchanged throughout the learning process. If the learning rate is set too high, the network may converge quickly but become unstable; if it is set too low, the network converges slowly and consumes a large amount of computing time. Therefore, it is very difficult to choose a suitable learning rate for the traditional RBF neural network.

In order to solve the problems above, this paper proposes a variable structure RBF neural network (VS-RBF) with a fast learning rate. The number of neurons in the hidden layer is adjusted by calculating the output information (OI) of neurons in the hidden layer and the multi-information (MI) between neurons in the hidden layer and the output layer. The robust regression method is used instead of the linear least squares algorithm to reduce the influence of outliers on weight training. Then, the fast learning rate method is used to adjust the learning rate of the RBF neural network, which guarantees both stable learning and a fast convergence speed. The proposed VS-RBF neural network can grow or prune the neurons in the hidden layer according to the actual system.

The rest of this paper is organized as follows. Section 2 introduces the index system for the sortie generation capacity of carrier aircrafts. Section 3 describes the VS-RBF neural network with a fast learning rate. Section 4 proves the convergence of VS-RBF. In Section 5, VS-RBF is compared with SORBF and RS-RBF in approximating a typical nonlinear function and identifying UCI datasets. In Section 6, VS-RBF is used to evaluate the sortie generation capacity of the carrier aircraft. Finally, conclusions are presented in Section 7.

2. Sortie Generation Capacity of Carrier Aircrafts

At present, there is very little domestic operational experience with carrier aircrafts, so the index system of sortie generation capacity is established on the basis of foreign research results. The index system of sortie generation capability established in this paper is based on the main factors identified in the report on the 1997 “Nimitz” carrier aircraft exercise. These factors are taken from a real environment and therefore have high reference value. During the four-day exercise period, equipment and environment did not affect the sorties, so the corresponding equipment and environment indexes are not included in the selection [2, 3].

In this paper, when the index system of sortie generation capability is established, on the one hand, we hope that the factors will be comprehensive so that the credibility of the evaluation results is increased. On the other hand, if all possible factors are added to the index system, the modeling difficulty increases greatly and the evaluation system becomes hard to verify. Therefore, the evaluation indexes are selected according to the following principles:
(1) Considering the hierarchy and correlation between the evaluation indexes, if a high-level index can be obtained, the underlying indexes are not considered repeatedly. For example, one of the lowest-level factors is the carrier deck design, but its impact on the sortie generation capability is already reflected in the preparation time for the next sortie, the ejection interval, and the recovery interval, so the carrier deck design is not included separately in the evaluation system.
(2) Grasping the main indexes makes the evaluation system easy to understand and operate. Environmental factors have little impact on most operations and are therefore ignored.
(3) The main purpose of this paper is quantitative evaluation. Indexes that are difficult to quantify, such as the quality of the environment and the ability of personnel, are not used to establish the evaluation system.

In summary, the index system for sortie generation capacity of carrier aircrafts is established with related research results [2]. A three-level index system with complexity, hierarchy, contradiction, and relevance is established by the recursive hierarchy method. The index system for sortie generation capacity of carrier aircrafts is shown in Figure 1.

These indexes are defined as follows:
(1) Emergency sortie generation rate (ESGR): the maximum number of ready aircrafts taking off in a few minutes.
(2) Surge sortie generation rate (SSGR): the average number of aircrafts per day in the surge operation (4 days).
(3) Last sortie generation rate (LSGR): the average number of aircrafts per day in the continuous operation (30 days).
(4) Performing tasks proportion (PTP): the proportion of time during which the aircrafts can carry out at least one task under a certain flight plan and logistics condition.
(5) Missing tasks proportion waiting for parts (MTPWP): the proportion of aircrafts missing tasks because they are waiting for parts.
(6) Missing tasks proportion waiting for repair (MTPWR): the proportion of aircrafts missing tasks because they are waiting for repair.
(7) Scheduled completion proportion (SCP): the proportion of the completed number in the planned number of aircrafts.
(8) Pilot utilization rate (PUR): the average utilization rate of the pilots per day.
(9) Plan implementation probability per aircraft (PIPA): the probability that each aircraft implements the plan under certain constraints in a given period of time.
(10) Sortie generation rate per aircraft (SGRA): the sortie generation rate per aircraft under certain constraints.
(11) Preparation time for next sortie (PTNS): the preparation time for the next sortie under a certain resource allocation.
(12) Ejection interval (EI): the average time for ejecting a single aircraft per catapult.
(13) Take-off outage proportion (TOOP): the proportion of cancelled take-offs in the ready number of aircrafts.
(14) Recovery interval (RI): the average time for recovering a single aircraft.
(15) Overshoot proportion (OP): the proportion of aircrafts that fail to recover in the number of aircrafts ready to recover.

In the three-level recursive hierarchical graph of Figure 1, there are interactions between the underlying indexes:
(1) In practice, the surge sortie generation rate and the last sortie generation rate are contradictory; they cannot reach their optimal values at the same time.
(2) The preparation time for the next sortie and the ejection interval together constitute the duration of one wave. If the preparation time is allotted generously, the time available for ejection and recovery is reduced, and the sortie generation capability decreases. If the time for ejection and recovery is allotted generously, the preparation time may be insufficient for the carrier aircraft support operations, reducing the number of available carrier aircrafts and therefore the sortie generation capability.

Therefore, there are correlations and contradictions among the indexes of sortie generation capability, and the evaluation of sortie generation capability shows a complex nonlinear relationship. Using the nonlinear mapping ability of the neural network to evaluate this complex nonlinear capability avoids the subjectivity of traditional evaluation methods and the complexity of the evaluation process.

Several references have used different methods to evaluate the sortie generation capacity of carrier aircrafts. Xie et al. [21] investigated the complicated relation between sortie generation and aviation maintenance of the Nimitz class carrier aircraft and observed that the state transition diagram was both effective and efficient for system analysis. Liu et al. [22] proposed that sortie generation is one of the critical indexes used to characterize carrier and air wing capabilities. In order to research the index system of sortie generation capacity and its effects on the embarked air wings, attention was drawn to the analysis of the basic concept of sortie generation rate (SGR) and a range of constraints for launch and recovery, followed by a review of the definition of the SGR index system commonly applied in the assessment of various types of carrier aircrafts commissioned in foreign navies (e.g., the USN, the RN, the French Navy, and the Russian Navy). Analyses were also performed on the SGR index system used for these typical carriers. The conclusion was that the establishment of an effective index system and the setting of the relevant indexes must be accomplished through actual operations and combat exercises. Zhou et al. [23] considered that the operational capability of the flight deck is the critical factor affecting the sortie generation of a carrier-based aircraft, including launch, recovery and respot operations, and servicing. The definition of an optimized flight deck operation plan was given, and a method to calculate the number of sorties generated under the optimized flight deck operation plan was proposed. Factors including aircraft number, launch time, respot time, and recovery time, and how they affect sortie generation, were analysed. The results showed that promoting the capability of the flight deck is the key to increasing the number of sorties. Wang and Yan [24] summarized three evaluation methods for the SGR of embarked aircrafts according to relevant exercise data and papers released by the US Navy and analysed their characteristics. The advantage of statistical analysis was that the accuracy and credibility of the data were high; the disadvantages were that it could only evaluate carrier aircrafts already in service, the cost was high, and the cycle was long. The empirical formula method was based on the recovery data of different carrier aircrafts at different times. It could quickly predict the capacity of the carrier aircraft under typical combat tasks, but it required a large amount of actual operational data, and the errors of the evaluation results were large. The experimental method was based on the operation process of the carrier aircraft and used computer simulation; it was characterized by wide applicability, high accuracy, low cost, and a short cycle. Zhang et al. [25] established a system of parameter-based models to analyse the effectiveness of the major factors of swarming aircraft. The AHP method was used to calculate the weight of each parameter, and the fuzzy synthetic evaluation method was applied to assess the operational performance. The result demonstrated that the method was feasible for delivering a scientific evaluation and presented a new perspective for judging the efficiency of swarming aircrafts.
References [26–28] used the AHP method to evaluate the antiship combat capability of the carrier aircraft, the threat of the carrier-borne aircraft, and the effectiveness of the carrier-based aircraft in air defence.

3. VS-RBF Neural Network with the Fast Learning Rate

In this section, the VS-RBF neural network with a fast learning rate is described. The robust regression method is used instead of the linear least squares algorithm to reduce the influence of outliers on weight training. Then, the fast learning rate method is used to adjust the learning rate of the RBF neural network, which guarantees both stable learning and a fast convergence speed. The proposed VS-RBF has better nonlinear approximation ability, a faster training speed, and a more compact network structure than SORBF and RS-RBF.

3.1. Structure of the RBF Neural Network

The structure of the RBF neural network is similar to that of a multilayer feedforward network. There are three layers: an input layer, a hidden layer, and an output layer. The topology of the RBF neural network is shown in Figure 2.

Therefore, for a multi-input and single-output RBF neural network, the output can be described as

y = ∑_{j=1}^{m} w_j φ_j(x), (1)

where x = [x_1, x_2, …, x_n]^T is the input vector; n is the node number in the input layer; w_j is the weight of the output layer; m is the neuron number in the hidden layer; φ_j(·) is the radial basis function in the hidden layer, which is the Gauss function, namely, φ_j(x) = exp(−‖x − c_j‖² / (2σ_j²)); σ_j is the expansion constant of the radial basis function; ‖·‖ is the Euclid norm; c_j is the data center of the jth hidden node; y is the output of the RBF network; and j = 1, 2, …, m. In Figure 2, the output information (OI) is φ_j(x), and the multi-information (MI) is the intensity of information between φ_j(x) and y.
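
As an illustration of (1), the following Python sketch implements the forward pass of a Gaussian RBF network under the notation above; the function name rbf_forward and the array layout are choices made here for illustration, not part of the original paper.

import numpy as np

def rbf_forward(x, centers, sigmas, weights):
    """Forward pass of a multi-input, single-output Gaussian RBF network.

    x       : (n,) input vector
    centers : (m, n) data centers of the hidden nodes
    sigmas  : (m,) expansion constants of the radial basis functions
    weights : (m,) output-layer weights
    Returns the hidden outputs phi (the OI of each hidden neuron) and the
    network output y.
    """
    dist = np.linalg.norm(x - centers, axis=1)        # Euclid norm ||x - c_j||
    phi = np.exp(-dist**2 / (2.0 * sigmas**2))        # Gauss radial basis functions
    y = phi @ weights                                 # weighted sum in the output layer
    return phi, y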

In the human cerebral cortex, local accommodation and overlapping receptive fields are characteristics of the brain's response. The Gauss function φ_j(x) responds strongly only in a local region around its center, which reflects this characteristic of the cerebral cortex response. In this paper, the OI intensity of neurons in the hidden layer and the MI intensity between neurons are used to adjust the neurons in the hidden layer, and thus the topology of the neural network is modified.

There are many methods to change the RBF structure. Zheng et al. [29] proposed a meshfree (meshless) local RBF collocation method to calculate the band structures of two-dimensional antiplane transverse elastic waves in phononic crystals. Three new techniques were developed for calculating the normal derivative of the field quantity required by the treatment of the boundary conditions, which improved the stability of the local RBF collocation method significantly. However, pruning was performed at the end of training rather than during the learning process, so the method is limited in application. Sarimveis et al. [30] presented a new method for extracting valuable process information from input/output data. The proposed methodology produced dynamic RBF neural network models based on a specially designed genetic algorithm (GA), which was used to autoconfigure the structure of the networks and obtain the model parameters. However, this algorithm is a global optimization algorithm, which takes a long time in the training process. Han et al. [31] presented a flexible structure RBF neural network (FS-RBFNN), which could vary its structure dynamically in order to maintain the prediction accuracy. The hidden neurons could be added or removed online based on neuron activity and mutual information, to achieve the appropriate network complexity and maintain overall computational efficiency. However, the algorithm ignored the implicit relationship between neurons, which might cause overfitting. Wu et al. [32] adopted the cloud RBF neural network as the function approximation structure of approximate dynamic programming, which has the advantage of capturing the fuzziness and randomness of the cloud model. But the cloud method is a global search algorithm, which reduces the overall learning speed. Fu and Wang [33] proposed a novel separability-correlation measure (SCM) to rank the importance of attributes. According to the attribute ranking results, different attribute subsets were used as inputs to a classifier, such as an RBF neural network; attributes that increased the validation error were deemed irrelevant and were deleted. The complexity of the classifier could thus be reduced and its classification performance improved. But the initial values were set with the global sample data, which are hard to obtain in practical applications. Peng et al. [34] proposed a novel hybrid forward algorithm (HFA) for the construction of RBF networks with tunable nodes. The main objective was to efficiently and effectively produce a parsimonious RBF neural network that generalizes well, which was achieved through simultaneous network structure determination and parameter optimization in the continuous parameter space. However, the parameters of this method are too complex. Han et al. [35] proposed a new growing and pruning algorithm for RBF neural network structure design, named self-organizing RBF (SORBF), in which the growing and pruning algorithm designs the structure of the RBF neural network automatically. But the overall training time of SORBF is too long. Ding et al. [36] combined rough set theory with the neural network (RS-RBF). The model overcame the shortcoming that, when the neural network has too many input dimensions, the network structure becomes too large. In order to solve the problems above, this paper proposes a variable structure RBF neural network (VS-RBF) with a fast learning rate.
The number of neurons in the hidden layer is adjusted by calculating the output information (OI) of neurons in the hidden layer and the multi-information (MI) between neurons in the hidden layer and output layer. The convergence of the final network in the structural adjustment process is proved. In this paper, the proposed VS-RBF neural network can be used to grow or prune the neurons in the hidden layer according to the actual system.

3.2. VS-RBF Neural Network

VS-RBF adjusts its structure based on information intensity. First, the activity of the neurons in the hidden layer is determined by the OI intensity of the hidden neurons, and the neurons with strong activity are decomposed. Second, the connection strength between the neurons in the hidden layer and those in the output layer is analyzed by calculating the MI intensity between the neurons, and the network structure is modified according to the MI intensity. Finally, the parameters of the neural network are adjusted. Therefore, the structural adjustment of the RBF neural network can be divided into two parts: the decomposition of neurons in the hidden layer and the interconnection adjustment between the neurons in the hidden layer and those in the output layer.

3.2.1. Decomposition of Neurons in the Hidden Layer

The activity of a neuron denotes its information capacity: a neuron with higher activity contains more information. In order to increase the accuracy of the RBF network, the information should be evenly distributed among the neurons. Thus, a neuron with higher activity should be decomposed into a larger number of neurons.

The activity A_i of the ith neuron in the hidden layer is calculated from (2), where i = 1, 2, …, m, m is the number of neurons in the hidden layer, φ_i(x) and φ_j(x) are the outputs of the ith and jth neurons in the hidden layer, and ε is a small real number. The activity of a neuron in the hidden layer is inversely proportional to the Euclid distance between φ_i(x) and φ_j(x): the closer the distance, the higher the activity value. The activity is obtained from the outputs of the hidden layer and indicates the need for decomposition.

When the activity of the ith neuron in the hidden layer is strong, that neuron is decomposed into new neurons. The main idea behind this decomposition is to distribute the information evenly among the neurons in order to increase the accuracy of VS-RBF. When the activity A_i of the ith neuron in the hidden layer is greater than the activity threshold, which is determined by the expected error, the connection between the ith neuron and the output neuron is broken, as shown in Figure 3.

In Figure 3, if the activity of the ith neuron is greater than the activity threshold, the connection between the ith neuron and the output neuron is broken. After decomposition, the ith neuron is decomposed into new neurons, and each of the new neurons connects to the output neuron.

After the connection between the ith neuron and the output neuron is broken, the new neurons are connected to the output neuron. The initial centers and variances of the new neurons are obtained from (3) and (4), where c_i and σ_i are the center and variance of the ith neuron in the hidden layer, respectively, c_k and σ_k are the center and variance of the kth neuron decomposed from the ith neuron, and the number of new neurons is given by the integer part of the corresponding expression in (3) and (4).

The weight between a new neuron and the output neuron is obtained from (5), where the quantities involved are the decomposition parameter of the ith neuron, the output of the new neuron, and the output error of the neural network before decomposition.
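
The following sketch illustrates only the bookkeeping of the decomposition step: replacing one active hidden neuron by several new ones. The initialization rules used here (perturbing the parent center, keeping its variance, and sharing its output weight equally) are simple placeholders; the paper initializes the new neurons with (3)–(5).

import numpy as np

def decompose_neuron(centers, sigmas, weights, i, k, rng):
    """Replace hidden neuron i by k new neurons (illustrative placeholder rule).

    The parent's center is perturbed slightly for each child, the variance is
    kept, and the parent's output weight is shared equally among the children.
    The exact initialisation in the paper follows (3)-(5); these placeholder
    rules only show how the hidden layer grows.
    """
    child_centers = centers[i] + 0.01 * rng.standard_normal((k, centers.shape[1]))
    child_sigmas = np.full(k, sigmas[i])
    child_weights = np.full(k, weights[i] / k)

    centers = np.vstack([np.delete(centers, i, axis=0), child_centers])
    sigmas = np.concatenate([np.delete(sigmas, i), child_sigmas])
    weights = np.concatenate([np.delete(weights, i), child_weights])
    return centers, sigmas, weights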

3.2.2. Interconnection Adjustment between the Neurons in the Hidden Layer and in the Output Layer

The MI function of the jth neuron in the hidden layer and the neuron in the output layer is obtained from (6). MI(φ_j, y) depends on the intensity of information between φ_j(x) and y; (6) involves the joint distribution density function of φ_j(x) and y and their marginal probability densities. According to Shannon entropy theory, these density functions are not calculated directly in practice; the corresponding entropies are calculated instead.

Assuming that the jth hidden neuron and the output neuron are interconnected, MI(φ_j, y) depends on the intensity of information between φ_j(x) and y.

According to Shannon entropy theory, the connection strength between φ_j(x) and y can be calculated as MI(φ_j, y) = H(y) − H(y | φ_j) in (7), where H(y) is the entropy of y and H(y | φ_j) is the entropy of y conditioned on φ_j(x). When φ_j(x) and y are independent, H(y | φ_j) = H(y); otherwise, H(y | φ_j) < H(y). Therefore, MI(φ_j, y) ≥ 0, and the range of MI(φ_j, y) is 0 ≤ MI(φ_j, y) ≤ H(y), as shown in (8).

Then, based on (8), the normalized MI is obtained from (9) and lies between 0 and 1. Calculating the normalized MI determines the interaction strength between φ_j(x) and y.

In the RBF neural network, when the normalized MI is large, the interaction between φ_j(x) and y is strong and the connection between the jth hidden neuron and the output neuron is kept. When the normalized MI approaches 0, the interaction between φ_j(x) and y is weak, and the connection can be deleted during the structure adjustment, which reduces the redundancy of the neural network.
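
A minimal sketch of estimating the normalized MI between a hidden-neuron output and the network output is given below. The histogram-based entropy estimate and the normalization by H(y) are assumptions made for illustration, since the paper only states that the entropies are computed and the result is normalized.

import numpy as np

def normalized_mi(phi_j, y, bins=10):
    """Histogram estimate of the normalised mutual information between a
    hidden-neuron output phi_j and the network output y.

    The binning scheme and the normalisation by H(y) are assumptions for
    illustration.
    """
    joint, _, _ = np.histogram2d(phi_j, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    mi = np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz]))      # I(phi_j; y)
    p_y_flat = p_y.ravel()
    h_y = -np.sum(p_y_flat[p_y_flat > 0] * np.log(p_y_flat[p_y_flat > 0]))  # H(y)
    return mi / h_y if h_y > 0 else 0.0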

In Figure 4, when the normalized MI is smaller than its threshold, the connection between the jth hidden neuron and the output neuron is disconnected. In the hidden layer, the neuron nearest to the jth neuron is found, and the two neurons are combined into a new neuron. Thus, no new layer is added to the network, and there is no weight between the jth neuron and its nearest neighbour. The parameters of the new neuron are adjusted according to (10)–(12), which specify its center, its variance, and the weight between the new neuron and the output neuron.

VS-RBF can not only increase the number of neurons in the hidden layer but also remove redundant neurons. The optimal number of RBF neurons is obtained by adjusting the structure according to the output information of the neurons in the hidden layer and the multi-information between the neurons in the hidden layer and the output layer. When the output error meets the demand, the optimal number of RBF neurons has been reached.

3.3. Robust Regression Training Method for Weights in the Output Layer

At present, RBF neural network weights are usually trained by the linear least squares algorithm, but the least squares estimate is affected by outliers. In order to reduce the influence of outliers, the robust regression method is applied to train the weights in the output layer.

The output of the RBF network is the weighted sum of the outputs of the hidden layer. The aim of training is to minimize the sum of squared errors between the output of the whole network and the actual output. The sum of squared errors is obtained from (13), where y_k is the actual output of the kth sample, φ_k is the output vector of the hidden layer for the kth sample, w is the weight vector, N is the number of samples, and k = 1, 2, …, N.

In order to reduce the influence of outliers, we seek a function ρ(·) that increases as the residual increases but whose growth rate is slower than that of the squared error. The sum of squared errors can then be replaced by the expression in (14).

ρ(·) is chosen as an Andrews function, which can be expressed as (15), and its derivative ψ(·) can be expressed as (16), where c is a threshold constant.

Therefore, the robust regression can be written in the recursive form (17), where ŵ is the estimated weight and μ is the training coefficient, which can be determined according to the actual situation. The recursion is then transferred to the update form in (18).

Using the robust regression method thus reduces the influence of outliers on the training of the neural network.
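
The sketch below shows one common way to realize such a robust fit: an iteratively reweighted least squares (IRLS) loop with Andrews weights. The tuning constant, the MAD-based scale estimate, and the IRLS form itself are assumptions for illustration; the paper uses the recursive update (17)–(18).

import numpy as np

def andrews_weight(r, c=1.339):
    """Weights from Andrews' sine M-estimator: w(r) = sin(r/c)/(r/c) for
    |r| <= c*pi and 0 otherwise. The tuning constant c is an assumption."""
    z = r / c
    return np.where(np.abs(z) <= np.pi, np.sinc(z / np.pi), 0.0)  # sinc(x) = sin(pi x)/(pi x)

def robust_output_weights(Phi, y, n_iter=20, c=1.339):
    """Iteratively reweighted least squares for the output-layer weights.

    Phi : (N, m) hidden-layer outputs for the N training samples
    y   : (N,) actual outputs
    This sketch replaces the recursive form (17)-(18) in the paper with a
    standard IRLS loop; both downweight outliers through the Andrews function.
    """
    w = np.linalg.lstsq(Phi, y, rcond=None)[0]        # ordinary least-squares start
    for _ in range(n_iter):
        r = y - Phi @ w                               # residuals
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12
        W = andrews_weight(r / scale, c)
        WPhi = Phi * W[:, None]
        w = np.linalg.lstsq(WPhi.T @ Phi + 1e-8 * np.eye(Phi.shape[1]),
                            WPhi.T @ y, rcond=None)[0]
    return w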

3.4. Fast Learning Rate

In the use of the RBF neural network, the learning rate is often set subjectively as a fixed value and remains unchanged throughout the learning process. If the learning rate is set too high, the network may converge quickly but become unstable; if it is set too low, the network converges slowly and consumes a large amount of computing time. Therefore, it is very difficult to choose a suitable learning rate for the traditional RBF neural network. In order to solve this problem, this paper proposes a new fast learning rate. The fast learning rate is recomputed at each iteration, which ensures the stability of the network while improving its convergence speed and efficiency.

Set the quantities as in (19), where N is the number of samples, m is the number of nodes in the hidden layer, and y is the output calculated by the network. Then the cost function of the training can be defined as in (20).

The output error e is the difference between the actual output and the network output, and Δw is the change of the weights during training. According to (18), Δw is obtained as in (21).

The increase of the error can be expressed as in (22).

In (22), the two terms are the change of the actual output and the change of the network output. In general, the absolute value of the change of the actual output is far smaller than that of the change of the network output, because the actual output is often constrained by many conditions while the network output is not limited; hence, the change of the actual output is negligible compared with the change of the network output. This assumption has practical significance. Thus, (22) can be approximated as in (23).

Considering (21), the change of the error is expressed in (24).

Then, considering (24), the updated error is expressed in (25).

The cost function of the training can then be obtained from (25); it is expressed in (26).

This cost function can be considered as a function of the learning rate η. The optimal value of the learning rate can be obtained by minimizing this cost function. The first-order condition of (26) is expressed in (27).

The second-order condition of (26) is expressed in (28).

The second-order condition (28) holds because the corresponding matrix is positive definite, and the fast learning rate can then be obtained from (27); it is expressed in (29).
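
For a linear-in-weights output layer, the step size that minimizes the quadratic cost along the gradient direction has a simple closed form, sketched below. This is an illustration of the idea behind (29) under the stated assumption that the change of the actual output is negligible; it is not claimed to be the paper's exact expression.

import numpy as np

def fast_learning_rate(Phi, e):
    """Closed-form step size that minimises the quadratic cost along the
    gradient direction for a linear-in-weights output layer.

    Phi : (N, m) hidden-layer outputs, e : (N,) current output errors.
    With gradient g = Phi^T e, the cost J(eta) = 0.5 * ||e - eta * Phi g||^2
    is minimised at eta = (g^T g) / (g^T Phi^T Phi g); the second derivative
    ||Phi g||^2 is positive, matching the second-order condition.
    """
    g = Phi.T @ e
    Pg = Phi @ g
    denom = Pg @ Pg
    return (g @ g) / denom if denom > 0 else 0.0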

3.5. Learning Algorithm of VS-RBF with the Fast Learning Rate

VS-RBF with the fast learning rate can not only increase the number of neurons in the hidden layer but also remove redundant neurons. At the same time, the robust regression algorithm avoids the influence of abnormal samples on the neural network, and the fast learning rate ensures that the learning rate is appropriate at every iteration. The steps of the learning algorithm are shown in Figure 5 and summarized below; an illustrative code sketch follows the list.
Step 1. For a given RBF neural network, the number of neurons in the hidden layer starts as a small natural number. Since VS-RBF can adjust the number of hidden neurons, the initial number is set equal to the number of neurons in the input layer (or slightly more); in this paper it is set equal to the number of input neurons. The network centers and variances are selected by the least squares method, and the network output is calculated. The learning accuracy and the maximum number of iterations are set. The number of neurons in the input layer equals the number of inputs, and the number of neurons in the output layer equals the number of outputs.
Step 2. The initial weights between the hidden layer and the output layer are set. The network output is calculated, and the root mean square error between the actual output and the network output is calculated.
Step 3. If the root mean square error meets the learning accuracy or the number of iterations reaches the maximum, go to Step 7. Otherwise, the fast learning rate is calculated by (29), and the weights are updated accordingly.
Step 4. The activity of each neuron in the hidden layer is calculated. If the activity is greater than the threshold, the neuron is decomposed and the network structure is adjusted; the initial parameters of the new neurons are set according to (3) and (4).
Step 5. The connection strength between each neuron in the hidden layer and the neuron in the output layer is calculated. When the normalized MI falls below its threshold, the corresponding connection is disconnected; the nearest hidden neuron is found, and the parameters of the merged neuron are adjusted according to (10)–(12).
Step 6. The network output and the root mean square error between the actual output and the network output are updated. Return to Step 3.
Step 7. The root mean square error and the network output are calculated.
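
The following condensed Python skeleton ties the earlier sketches together (rbf_forward, fast_learning_rate, decompose_neuron, and normalized_mi refer to the illustrative functions given above). The activity measure, the thresholds, and the replacement of the merge step by simple pruning are placeholders, so this is a sketch of the loop in Figure 5 rather than the paper's exact algorithm.

import numpy as np

def train_vs_rbf(X, Y, centers, sigmas, weights,
                 e_target=0.01, max_iter=10000, act_thresh=0.5, mi_thresh=0.1):
    """Skeleton of the VS-RBF learning loop (Steps 1-7). Thresholds and the
    helper functions are the illustrative sketches given earlier, not the
    paper's exact formulas."""
    rng = np.random.default_rng(0)
    for it in range(max_iter):
        Phi = np.stack([rbf_forward(x, centers, sigmas, weights)[0] for x in X])
        e = Y - Phi @ weights                            # output errors
        rmse = np.sqrt(np.mean(e**2))
        if rmse < e_target:                              # Step 3: stopping test
            break
        eta = fast_learning_rate(Phi, e)                 # Step 3: fast learning rate
        weights = weights + eta * (Phi.T @ e)            # gradient step on the weights

        # Step 4: decompose the most active neuron (placeholder activity measure)
        activity = Phi.var(axis=0)
        i = int(np.argmax(activity))
        if activity[i] > act_thresh:
            centers, sigmas, weights = decompose_neuron(centers, sigmas, weights, i, 2, rng)
            continue

        # Step 5: prune hidden neurons weakly coupled to the output
        y_net = Phi @ weights
        mi = np.array([normalized_mi(Phi[:, j], y_net) for j in range(Phi.shape[1])])
        keep = mi > mi_thresh
        if keep.sum() >= 1 and not keep.all():
            centers, sigmas, weights = centers[keep], sigmas[keep], weights[keep]
    return centers, sigmas, weights, rmse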

VS-RBF with the fast learning rate can realize self-organization of the structure and determine whether to grow or prune the neurons in the hidden layer by calculating the information intensity. Thus, a new RBF neural network structure design method is proposed: it not only adjusts the weights of the neural network online but also grows or prunes the neurons in the hidden layer. From a biological point of view, this neural network structure is more similar to the information processing mechanism of human brain neurons.

3.6. Comparison of VS-RBF, SORBF, and RS-RBF

The advantages, main novelties, and disadvantages of SORBF [35] and RS-RBF [36] are as follows:
(1) SORBF: neither the number of nodes in the hidden layer nor the parameters need to be predefined and fixed; they are adjusted automatically in the learning process according to the growing and pruning criteria. This type of SORBF-based approach offers a promisingly inexpensive way to measure, in real time, variables that have typically proved difficult to measure reliably using hardware. However, the algorithm ignores the adjustment of structural parameters, which leads to a slow convergence speed of the learning algorithm.
(2) RS-RBF: by processing multiple nodes of the network at one time, multiple hidden nodes can be cut off, and the core hidden nodes can be found from the network output. An adaptive principle is introduced so that the segmentation changes with pruning. A memory function remembers the most important nodes at each pruning step, and these nodes are not deleted in subsequent pruning even if their outputs are small. However, this method is affected by the initial values, and the final RBF neural network is sometimes unstable.

4. Proof of Convergence

The convergence of VS-RBF with the fast learning rate affects the performance of the final network.

4.1. Fixed Neural Network Structure

When the neural network structure is fixed, it can be concluded that VS-RBF with the fast learning rate can guarantee the convergence of the final neural network referring to [31].

Theorem 1. According to the changed neural network structure of VS-RBF, the error after decomposition is 0, and the error after interconnection adjustment is equal to the error of the fixed neural network structure.

4.2. Changed Neural Network Structure

At the moment t, VS-RBF with the fast learning rate has a given number of neurons in the hidden layer, and the current error is e(t).

4.2.1. Decomposition of the Neuron in the Hidden Layer

When the ith neuron in the hidden layer is decomposed, a certain number of new neurons are produced, so the number of neurons in the hidden layer increases accordingly. The error after decomposition is expressed in (30), in which the expected output and the sample at the moment t appear; the parameters of the new neurons are obtained from (3) and (4).

In (30), the two terms in the middle can be expressed as in (31), in which the input vector appears.

According to (30) and (31), the error of the neural network after decomposition is expressed in (32).

After the structure is adjusted, the error of the network output at that moment is zero, and the convergence of the average error over the samples is accelerated. This reflects that the decomposition of neurons can improve the learning efficiency of the neural network.

4.2.2. Interconnection Adjustment between the Neurons in the Hidden Layer and in the Output Layer

At the moment t, suppose that the connection between the jth neuron in the hidden layer and the neuron in the output layer needs to be disconnected, and that the hidden neuron nearest to the jth neuron has been found. Then the error of the network output after disconnection becomes (33).

According to (10)–(12), (33) can be rewritten as (34).

Therefore, disconnecting neurons in the hidden layer from the neurons in the output layer does not affect the error of the network output. Thus, the optimization process of the RBF structure does not affect the convergence of the neural network.

To sum up, VS-RBF with the fast learning rate guarantees the convergence of the final network, and the algorithm is simple. The neural network not only realizes structure and parameter adjustment but also accounts for the weight changes during the structural optimization process.

5. Experimental Validation

VS-RBF with the fast learning rate can adjust the number of neurons in the hidden layer on the basis of the complexity of the object and improve the performance of the RBF neural network. Compared with other self-organizing RBF neural networks (SORBF and RS-RBF), VS-RBF has a more compact structure, faster dynamic response speed, and better generalization ability in approximating a typical nonlinear function, identifying UCI datasets, and evaluating sortie generation capacity of the carrier aircraft.

5.1. Nonlinear Function Approximation

The nonlinear function SIF, whose expression and input ranges are given in (35), is selected as the test function. The SIF function is commonly used to test the performance of neural networks [37].

Data points are randomly sampled, and white Gaussian noise with a standard deviation of 0.01 is added, to produce training and validation datasets, each containing 300 samples. Thus, 600 groups of samples are selected: 300 groups are used for training, and the other 300 groups are used for testing. The simulation environment is a computer with an Intel Core i3-4160 CPU, 4.00 GB RAM, and a 64-bit operating system. The software is Matlab R2016b.
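
As an illustration of this data preparation, the sketch below draws 600 random input pairs, evaluates a placeholder two-input surface (the exact SIF expression from the paper should be substituted), adds white Gaussian noise with a standard deviation of 0.01, and splits the data into 300 training and 300 test samples; the input range is an assumption.

import numpy as np

rng = np.random.default_rng(0)

def sif(x1, x2):
    # Placeholder two-input nonlinear surface; substitute the exact SIF
    # expression used in the paper here.
    return np.sin(x1) * np.cos(x2)

X = rng.uniform(-2.0, 2.0, size=(600, 2))                     # random input samples (assumed range)
y = sif(X[:, 0], X[:, 1]) + 0.01 * rng.standard_normal(600)   # add Gaussian noise (std 0.01)
X_train, y_train = X[:300], y[:300]                           # 300 groups for training
X_test, y_test = X[300:], y[300:]                             # 300 groups for testing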

The main steps of the identification with VS-RBF are as follows:
Step 1. The structure of the neural network is 2-3-1, i.e., the initial number of neurons in the hidden layer is set to 3. The neural network centers are selected as [−2, 0, −2; −2, 0, −2]. The variance is 3, and the initial function width is 1. The learning accuracy is 0.01, and the maximum number of iterations is set to 10,000.
Step 2. The initial weights between the hidden layer and the output layer are set to 1. The network output is calculated, and the root mean square error between the actual output and the network output is calculated.
Step 3. If the root mean square error meets the learning accuracy or the number of iterations reaches the maximum, go to Step 7. Otherwise, the fast learning rate is calculated by (29), and the weights are updated accordingly.
Step 4. The activity of each neuron in the hidden layer is calculated. If the activity is greater than the threshold, the neuron is decomposed and the network structure is adjusted; the initial parameters of the new neurons are set according to (3) and (4).
Step 5. The connection strength between each neuron in the hidden layer and the neuron in the output layer is calculated. When the normalized MI falls below its threshold, the corresponding connection is disconnected; the nearest hidden neuron is found, and the parameters of the merged neuron are adjusted according to (10)–(12).
Step 6. The network output and the root mean square error between the actual output and the network output are updated. Return to Step 3.
Step 7. The root mean square error and the network output are calculated.

Under this condition, the comparison of performances of VS-RBF, SORBF [35], and RS-RBF [36] is shown in Table 1. The remaining neurons in the hidden layer in the SIF approximation are shown in Figure 6. The approximation effect of SIF is shown in Figure 7. The error surface is shown in Figure 8.

Figure 6 shows the change of the number of remaining neurons during the training process. It can be seen that the structure adjustment of VS-RBF is stable and that its final structure is the most compact. A neuron of VS-RBF can be decomposed into several neurons at one time, so the structure adjustment of VS-RBF is quicker and the information processing ability of the RBF neural network is improved.

Figure 7 shows that VS-RBF can well approximate the nonlinear function SIF after training. And the output value of VS-RBF coincides with the actual value.

Figure 8 shows the error surface of VS-RBF in approximation. The test error is less than 0.015. Table 1 gives the comparison of VS-RBF, SORBF, and RS-RBF. Under the same initial conditions, the training times of SORBF and RS-RBF are more than that of VS-RBF. The structures of SORBF and RS-RBF after training are more complex than that of VS-RBF.

In addition, when using the trained neural network for function approximation, the test errors of SORBF and RS-RBF are larger than that of VS-RBF. The VS-RBF neural network has faster training speed, more compact network structure, and stronger nonlinear function approximation ability.

5.2. UCI Datasets

In order to show the effectiveness of VS-RBF, its identification performance is evaluated on UCI datasets. The Istanbul Stock Exchange dataset is selected from the UCI repository. The data are collected from http://imkb.gov.tr and http://finance.yahoo.com and organized by working days of the Istanbul Stock Exchange. The selected dataset includes the returns of the Istanbul Stock Exchange (ISE) together with seven other international indexes: the Standard & Poor’s 500 return index (SP), the stock market return index of Germany (DAX), the stock market return index of the UK (FTSE), the stock market return index of Japan (NIK), the stock market return index of Brazil (BVSP), the MSCI European Index (EU), and the MSCI Emerging Markets Index (EM), from June 5, 2009, to February 22, 2011. There are 536 groups in this dataset. The first 436 groups are used to train the network, and the last 100 groups are used to test the network. The number of inputs is 7, and the number of outputs (ISE) is 1.
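
A short sketch of loading and splitting this dataset is given below, assuming the UCI data have been exported locally to a CSV file; the file name and column labels are assumptions for illustration.

import pandas as pd

# Assumed local export of the UCI Istanbul Stock Exchange dataset; the file
# name and the column labels below are assumptions for illustration.
df = pd.read_csv("istanbul_stock_exchange.csv")
inputs = ["SP", "DAX", "FTSE", "NIK", "BVSP", "EU", "EM"]
X_train, y_train = df[inputs].iloc[:436].to_numpy(), df["ISE"].iloc[:436].to_numpy()
X_test, y_test = df[inputs].iloc[436:536].to_numpy(), df["ISE"].iloc[436:536].to_numpy()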

The main steps of the identification with VS-RBF on the UCI dataset are as follows:
Step 1. The structure of the neural network is 7-10-1, i.e., the initial number of neurons in the hidden layer is set to 10. The neural network centers are selected as [−3, −3, −3, −3, −3, −3, −3, −3, −3, −3; −2, −2, −2, −2, −2, −2, −2, −2, −2, −2; −1, −1, −1, −1, −1, −1, −1, −1, −1, −1; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0; 1, 1, 1, 1, 1, 1, 1, 1, 1, 1; 2, 2, 2, 2, 2, 2, 2, 2, 2, 2; 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]. The variance is 2, and the initial function width is 2. The learning accuracy is 0.01, and the maximum number of iterations is set to 10,000.
Step 2. The initial weights between the hidden layer and the output layer are set to 1. The network output is calculated, and the root mean square error between the actual output and the network output is calculated.
Step 3. If the root mean square error meets the learning accuracy or the number of iterations reaches the maximum, go to Step 7. Otherwise, the fast learning rate is calculated by (29), and the weights are updated accordingly.
Step 4. The activity of each neuron in the hidden layer is calculated. If the activity is greater than the threshold, the neuron is decomposed and the network structure is adjusted; the initial parameters of the new neurons are set according to (3) and (4).
Step 5. The connection strength between each neuron in the hidden layer and the neuron in the output layer is calculated. When the normalized MI falls below its threshold, the corresponding connection is disconnected; the nearest hidden neuron is found, and the parameters of the merged neuron are adjusted according to (10)–(12).
Step 6. The network output and the root mean square error between the actual output and the network output are updated. Return to Step 3.
Step 7. The root mean square error and the network output are calculated.

Under this condition, the comparison of performances of VS-RBF, SORBF [35], and RS-RBF [36] is shown in Table 2. The identification effect in the test is shown in Figure 9. The identification error is shown in Figure 10.

In Table 2, the number of remaining nodes in the hidden layer of VS-RBF is the smallest, and the actual error and training time of VS-RBF are less than those of SORBF and RS-RBF.

Figure 9 shows that VS-RBF can well identify the test data. And the output value of VS-RBF coincides with the actual value.

Figure 10 shows the identification error of VS-RBF. The test error is less than 0.014.

Thus, when using the trained neural network for identification, the test errors of SORBF and RS-RBF are larger than that of VS-RBF. The VS-RBF neural network has faster training speed, more compact network structure, and stronger identification ability in Istanbul Stock Exchange Dataset.

6. Evaluation for Sortie Generation Capacity of the Carrier Aircraft

The inputs of the neural network are the evaluation indexes of the sortie generation capacity of the carrier aircraft, and the output of the neural network is the evaluation value of the sortie generation capacity given by the expert scoring method. This paper selects the surge operation of the “Nimitz” carrier in 1997 as the object of study [1].

500 groups of data are selected, excluding abnormal data; 400 groups are used for training and 100 groups for testing. The inputs of the neural network correspond to the 15 lowest-level indexes of the evaluation index system.

The number of outputs is 1, and its value is the evaluation score. The data of each group in the 500 groups of samples include the 15 normalized lowest-level indexes and one evaluation value in the range [0, 1] for the sortie generation capacity of the carrier aircraft given by experts. Due to space limitations, only 10 groups of samples are given in Tables 3–6. When the evaluation of the sortie generation capacity of the carrier aircraft is closer to 1, the allocation of the sortie generation scheme under the current indexes is more reasonable and the sortie generation capacity is higher. Conversely, when the evaluation is closer to 0, the allocation of the sortie generation scheme under the current indexes is less reasonable and the sortie generation capacity is lower, which means that the scheme should be improved.

The main steps of the evaluation with VS-RBF are as follows:
Step 1. The structure of the neural network is 15-15-1, i.e., the initial number of neurons in the hidden layer is set to 15. The neural network centers are selected as 0.1 × [−7, −7, −7, −7, −7, −7, −7, −7, −7, −7, −7, −7, −7, −7, −7; −6, −6, −6, −6, −6, −6, −6, −6, −6, −6, −6, −6, −6, −6, −6; −5, −5, −5, −5, −5, −5, −5, −5, −5, −5, −5, −5, −5, −5, −5; −4, −4, −4, −4, −4, −4, −4, −4, −4, −4, −4, −4, −4, −4, −4; −3, −3, −3, −3, −3, −3, −3, −3, −3, −3, −3, −3, −3, −3, −3; −2, −2, −2, −2, −2, −2, −2, −2, −2, −2, −2, −2, −2, −2, −2; −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1, −1; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0; 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1; 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2; 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3; 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4; 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5; 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6; 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7] (a 15 × 15 matrix). The variance is 0.5, and the initial function width is 1. The learning accuracy is 0.001, and the maximum number of iterations is set to 10,000.
Step 2. The initial weights between the hidden layer and the output layer are set to 0.5. The network output is calculated, and the root mean square error between the actual output and the network output is calculated.
Step 3. If the root mean square error meets the learning accuracy or the number of iterations reaches the maximum, go to Step 7. Otherwise, the fast learning rate is calculated by (29), and the weights are updated accordingly.
Step 4. The activity of each neuron in the hidden layer is calculated. If the activity is greater than the threshold, the neuron is decomposed and the network structure is adjusted; the initial parameters of the new neurons are set according to (3) and (4).
Step 5. The connection strength between each neuron in the hidden layer and the neuron in the output layer is calculated. When the normalized MI falls below its threshold, the corresponding connection is disconnected; the nearest hidden neuron is found, and the parameters of the merged neuron are adjusted according to (10)–(12).
Step 6. The network output and the root mean square error between the actual output and the network output are updated. Return to Step 3.
Step 7. The root mean square error and the network output are calculated.

The trained VS-RBF neural network makes the final evaluation results close to the evaluation results of the experts, which indicates that VS-RBF can not only replace experts in evaluating the sortie generation capacity of the carrier aircraft but also guarantee high evaluation efficiency. Moreover, the trained VS-RBF avoids human errors and subjective effects in the evaluation process. At the same time, the trained VS-RBF neural network establishes the complex nonlinear relationship between the 15 indexes and the evaluation result. Using this nonlinear model, the influence of each index on the evaluation result can be further determined, providing references and suggestions for improving the sortie generation capacity of the carrier aircraft.

The linear transformation method is used to convert each original sample value to a value in [0, 1]. Two cases are distinguished:
(1) When a larger index value means better sortie generation capacity of the carrier aircraft, the linear transformation is x' = (x − x_min) / (x_max − x_min), as expressed in (36).
(2) When a smaller index value means better sortie generation capacity of the carrier aircraft, the linear transformation is x' = (x_max − x) / (x_max − x_min), as expressed in (37).
Here x' is the normalized sample value, x is the original sample value, and x_max and x_min are the maximum and minimum values of the current system, respectively.
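
A minimal sketch of this linear transformation is given below; it implements the standard min-max forms described above for benefit-type and cost-type indexes.

import numpy as np

def normalize_index(x, x_min, x_max, benefit=True):
    """Linear transformation of an index to [0, 1].

    benefit=True : a larger raw value means better sortie generation capacity,
                   x' = (x - x_min) / (x_max - x_min)
    benefit=False: a smaller raw value means better capacity,
                   x' = (x_max - x) / (x_max - x_min)
    """
    span = x_max - x_min
    x = np.asarray(x, dtype=float)
    return (x - x_min) / span if benefit else (x_max - x) / span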

The initial parameters of VS-RBF, SORBF, and RS-RBF are the same. The initial weights are arbitrary values, and the initial centers are arbitrary values from 0 to 1. The initial function width is set to 1. The learning accuracy of the neural network is 0.001, and the maximum number of iterations is 10,000.

The average performances of VS-RBF, SORBF, and RS-RBF over 50 training runs are shown in Table 7. The mean square errors of VS-RBF, SORBF, and RS-RBF in the training process are shown in Figure 11. The changes of the number of neurons in the hidden layer of VS-RBF, SORBF, and RS-RBF are shown in Figure 12. The comparisons of the network output values and the actual output values of VS-RBF, SORBF, and RS-RBF in the testing process are shown in Figure 13. The errors between the network output values and the actual output values for VS-RBF, SORBF, and RS-RBF are shown in Figure 14.

The simulation shows that VS-RBF can accurately evaluate the sortie generation capacity of the carrier aircraft. Figures 13 and 14 show that evaluation value of VS-RBF agrees with the actual evaluation value. The error is less than 0.01. The evaluation error of VS-RBF is less than those of SORBF and RS-RBF. It proves the effectiveness of the evaluation of the sortie generation capacity of the carrier aircraft with VS-RBF.

Figure 12 and Table 7 show that the average training time of VS-RBF is shorter than those of the other two neural network algorithms and that the final network structure of VS-RBF is the most compact, which illustrates the effectiveness of VS-RBF in the structural adjustment of neural networks. Table 7 also shows that the evaluation error of VS-RBF is the smallest, which shows that VS-RBF has good generalization ability.

6.1. Analysis for Sortie Generation Capacity of Carrier Aircrafts

The nonlinear VS-RBF model of the indexes and the evaluation result can be used not only to evaluate a given sortie generation scheme but also to determine the effect of each index on the evaluation result. The scheme with the basic configuration of the surge operation of the “Nimitz” carrier in 1997 is selected as the reference scheme, which is shown in Table 8.

On this basis, each index is adjusted, in turn, to 0, 0.25, 0.5, 0.75, and 1, and the sortie generation capacity of the carrier aircraft is evaluated.

The 15 indexes are adjusted in turn, and the influence curve and slope of each index on the sortie generation capacity are obtained, as shown in Figure 15 and Table 9.
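
The sketch below illustrates such a sensitivity sweep: one normalized index is varied over the five levels while the other indexes stay at the reference scheme, and the trained model (here an assumed callable named evaluate) returns the capacity score. The slope reported in Table 9 can then be obtained with a first-order fit such as numpy.polyfit(levels, scores, 1).

import numpy as np

def index_sensitivity(evaluate, base_scheme, index_id,
                      levels=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Sweep one normalised index over fixed levels while the other 14 stay at
    the reference scheme, and return the evaluated capacities. `evaluate` is
    assumed to be the trained VS-RBF model mapping the 15 indexes to a score."""
    scores = []
    for level in levels:
        scheme = np.array(base_scheme, dtype=float)
        scheme[index_id] = level      # adjust only the selected index
        scores.append(evaluate(scheme))
    return np.array(scores)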

The simulations show that the VS-RBF model can be used to quickly get the influence curve of each index on the sortie generation capacity, which can be used to directly determine the factors which have a great influence on the sortie generation capacity.

Table 9 shows the fitted slope and the absolute value of the slope for each index. The greater the absolute value of the slope, the greater the influence of the index on the sortie generation capacity; the smaller the absolute value of the slope, the smaller the influence.

Figure 15 shows that when PTP, SCP, PIPA, SGRA, or RI changes, the change in the sortie generation capacity is large. Therefore, these indexes should be allocated first when planning the scheme and kept in a good state during the exercise or war, which keeps the sortie generation capacity high.

At the same time, Figure 15 also shows that when ESGR, SSGR, MTPWP, MTPWR, or PUR changes, the change in the sortie generation capacity is small. The requirements of these indexes can be relaxed slightly, which ensures that the limited resources focus on the indexes with great influence on the sortie generation capacity and achieves a reasonable allocation of resources at low cost.

7. Conclusions

This paper proposes a VS-RBF network with a fast learning rate aimed at the structural optimization design and parameter learning algorithm of the RBF neural network. At the same time, a convergence analysis of VS-RBF is given to ensure the accuracy of the RBF neural network. By comparing with other self-organizing RBF neural networks, the following conclusions are obtained:
(1) VS-RBF can automatically adjust the structure of the RBF neural network according to the complexity of the object, obtaining a compact RBF neural network with strong dynamic response capability.
(2) A parameter learning algorithm adapted to the structure adjustment is obtained. The fast learning rate algorithm and the robust regression algorithm improve the convergence speed of the RBF neural network.
(3) A convergence analysis of the VS-RBF neural network is given, and VS-RBF shows good convergence and stability.
(4) Compared with several other self-organizing RBF neural networks, the proposed VS-RBF has the advantages of a compact structure, strong approximation ability, and self-organization ability. The evaluation of the sortie generation capacity of the carrier aircraft with VS-RBF provides technical support for the evaluation of complex systems.

To sum up, the VS-RBF proposed in this paper can effectively solve the problems of structure design and parameter learning of the RBF neural network, and it realizes the approximation of a typical nonlinear function and the evaluation of the sortie generation capacity of the carrier aircraft.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.