Three neural networks for prestressed fiber-reinforced polymer/plastics sheet-reinforced concrete beams

ABSTRACT This paper introduces a method for predicting the flexural ultimate load of prestressed fiber-reinforced polymer/plastics (FRP) sheet-reinforced concrete beams. Three artificial neural network models, based on the back propagation (BP), Levenberg-Marquardt (L-M), and Bayesian Regularization (BR) algorithms, were constructed to predict the flexural ultimate loads of prestressed FRP-reinforced concrete (FRP-RC) beams using a database of 243 concrete beams collected from previous studies. The optimal number of hidden layer nodes was determined to be 17 using the MSE value, R-value, and average error as evaluation metrics, and the optimal value of each network parameter was determined using the MSE value as the metric. A series of simulations showed that the BR model with 17 nodes in the hidden layer best matches the experimental results, with a root mean square error of 0.002256, a correlation coefficient (R) of 0.97972, and an average error of 8.79%. Contribution analysis of the input variables indicates that the flexural capacity of the RC beam is most significantly influenced by the beam width, concrete elastic modulus, and lower reinforcement, with relative importances of 10.38%, 9.88%, and 9.56%, respectively. Noisy data in the input layer were eliminated using singular value decomposition, further improving the performance of the neural networks.


Introduction
Sheet reinforcement is one of the most effective means of increasing the load-carrying capacity of a structure by improving its compressive and flexural properties as well as providing effective seismic protection for the structure (Wang and Zhou 2023b). Since the 1990s, fiber-reinforced polymer/plastics (FRP) has been at the forefront of research. FRP fiber materials have many advantages, such as being lightweight, easy to construct, and corrosion resistant, with good durability, a high strength-to-mass density ratio, and good fatigue performance, making them ideal for concrete structure reinforcement (Ye and Peng 2006). Compared with ordinary FRP reinforcement, bonded prestressed FRP can improve the performance and ultimate load capacity of reinforced concrete (RC) beams by mitigating the problems of inadequate exploitation of the high-strength properties and stress hysteresis (Lu et al. 2006).
Flexural strength is an important basis for structural design and construction and is one of the most important concrete properties. Concrete quality is directly related to evaluating and accepting subprojects, divisional works, and unit works. However, traditional experimental methods to measure the flexural strength of concrete, from sampling to conducting tests and analyzing the results, not only require considerable manpower and material resources but also take a long time. This makes it impossible to use the information most closely related to the current project in project management, which is unfavorable to speeding up construction progress and conserving construction funds and materials. Therefore, empirical prediction formulas have been used for many years to meet the requirements of timely decisions during construction. However, these formulas only consider certain quantitative factors, and some external factors affecting the formula can only be generalized to their limits, resulting in detection and calculation results that deviate significantly. The poor accuracy of traditional prediction methods makes them challenging to apply universally in practice. Scholars have studied the performance of structures through data analysis methods (Wang and Zhou 2023a). Therefore, this paper introduces an artificial neural network (ANN) to study prestressed FRP-RC beams.
The ANN is a nonlinear information processing tool that can learn from existing large data sets that may contain multiple predictors. Its ability to perform extensive mathematical mappings by recognizing patterns or classifying data, even with incomplete information, sets it apart from conventional problem-solving approaches. As the primary model for predicting flexural ultimate loads in RC, the ANN is a data-driven model with stronger capabilities for handling highly nonlinear and complex problems than other methods. For RC, the flexural ultimate load is influenced by various factors, such as material properties and geometric parameters. The ANN can effectively capture the complex relationships between these factors and has a strong capacity to adaptively adjust its weights and biases during training to improve prediction accuracy. This adaptive learning capability gives ANNs better generalization performance and less overfitting than other traditional models.

CONTACT Lidan Fan lidanfan@hpu.edu.cn School of Civil Engineering, Henan Polytechnic University, No. 2001 Shiji Road, Jiaozuo, Henan 454003, China
A typical neural network topology includes an input layer that passes data to the hidden layers, which perform computations and transmit information to the output layer. Depending on the desired ANN architecture, the hidden layers or computation nodes consist of one or more layers (Nolan 2022). Numerous industries, including civil engineering, have seen a rise in neural network and machine learning (ML) studies in recent years. ML uses learning algorithms to uncover the input-output links of complex nonlinear systems. Teymori Gharah Tapeh and Naser (2023) introduced standard algorithms and techniques of artificial intelligence (AI), ML, and deep learning for structural engineers. Akinosho and Oyedele (2020) performed a scientometric analysis of over 4,000 scholarly publications, collected from over 200 sources, on structural health monitoring, damage detection, and structural material performance prediction, to identify best practices in terms of procedures, performance metrics, and dataset sizes, along with a review of past and recent efforts to apply AI derivatives to various subfields of structural engineering. Their results indicate that ANNs and genetic algorithms account for the largest share of publications, about 55.9%. Numerous common neural network architectures and their current applications in the construction sector were presented, revealing how neural networks have been effectively used in construction, their potential future applications, and the difficulties that may arise. Dogan and Birant (2021) briefly overviewed various types of ML techniques, examined current trends in manufacturing, discussed the particular benefits of ML in the workplace, and pinpointed current applications along with the most effective methods for the necessary tasks.
Researchers are now working on more precise and complex prediction models, including various hybrid models that consider time series. Yuan et al. (2018) investigated the accuracy of a hybrid long short-term memory neural network and ant lion optimizer model (LSTM-ALO) for predicting monthly runoff; scatter plots and box plots comparing other models with the LSTM-ALO indicate that the LSTM-ALO has higher accuracy and provides an effective method for monthly runoff forecasting. Adnan et al. (2022) proposed a hybrid algorithm, ANFIS-GBO, to accurately estimate flow in a mountainous watershed; the GBO algorithm improves the prediction accuracy of ANFIS more than the other benchmark methods. Adnan et al. (2021) proposed a hybrid model for monthly runoff forecasting; comparing the ELM-PSOGWO with the ELM, ELM-PSO, ELM-GWO, and ELM-PSOGSA methods shows that the ELM-PSOGWO method is more successful than the other benchmark models. Ikram et al. (2022) accurately predicted renewable, environmentally friendly, and carbon-free energy resources, utilizing an improved version of the multiverse optimizer algorithm (IMVO) integrated with a least-squares support vector machine for better tuning of the hyperparameters. Adnan et al. (2022) integrated the simulated annealing algorithm with the mayfly optimization algorithm (SAMOA) to determine the optimal hyperparameters for support vector regression (SVR), overcoming the exploration weakness of the mayfly optimization method; the hybrid SVR-SAMOA model accurately predicts water flow. Ikram et al. (2022) developed the extended marine predator algorithm (EMPA)-based ANN (ANN-EMPA) as a novel hybridized ML method for streamflow estimation in the Upper Indus Basin, a key mountainous glacier-melt-affected basin of Pakistan. In civil engineering, Ikram et al. (2022) utilized an Extreme Learning Machine network based on the Chaos Red Fox Optimization Algorithm (ELM-CRFOA) to predict the shear strength of RC beams with composite steel. The accuracy of the method was demonstrated by comparing the model predictions with cumulative data and available shear design equations, and a sensitivity analysis was performed to investigate the effects of the input parameters on the shear strength of FRP-RC beams.
Many researchers have adopted ANN techniques in civil engineering to predict the mechanical performance of concrete (Mashhadban, Kutanaei, and Sayarinejad 2016; Cascardi, Micelli, and Aiello 2017; Congro 2021; Bhagwat and Nayak 2023; Li and Deng 2004; Ahmad 2020; Ashrafi 2010). Five ANNs with the Bayesian Regularization (BR) algorithm were used to predict the residual strength of fibrous concrete under bending loads (Congro 2021), and the predictions of the best neural networks matched the test load results well. Bhagwat et al. (2023) used three resilient back propagation (BP) neural networks to predict the flexural strength of corroded prestressed concrete beams; the proposed model converges before the other backpropagation algorithms, and the height and span of the beam were found to be important factors affecting the ultimate load, ultimate bending moment, and deflection, respectively. Li and Deng (2004) used test data and the principles of BP neural networks to establish a model that predicts the flexural load capacity of steel fiber concrete beams reinforced with carbon fiber fabric. Ahmad (2020) developed a framework for building an ANN capable of realistically predicting the load-carrying capacity of RC members. Reza Ashrafi, Jalal, and Garmsiri (2010) used neural network techniques to predict load-displacement curves and the concrete compressive strength based on mix proportions, all with satisfactory results. However, no prediction of the flexural performance of prestressed FRP-RC beams exists. Therefore, to accurately predict the flexural ultimate load of prestressed FRP-reinforced concrete beams for design purposes, 243 sets of experimental data were collected in this paper, and three ANNs, the traditional BP neural network, the Levenberg-Marquardt (L-M) neural network, and the BR neural network, were used to predict the flexural ultimate load of prestressed FRP-RC beams. The effect of the factors in the input layer on the flexural ultimate load of prestressed FRP-RC beams has also been investigated. The details of the work are as follows:
• The optimal structure and parameters of the neural network models are determined.
• Three neural network models are trained, and their performances are compared.
• The contributions of the input variables are analyzed to investigate their influence on the output variable and comprehensively understand the prediction models.
• Noisy data in the input layer are eliminated using the singular value decomposition (SVD) method.

Data sources and distribution
To accurately predict the ultimate flexural load capacity of prestressed FRP-RC beams, 243 sets of test data were collected from existing studies (Du 2005; Wang 2003). Thirteen influencing factors were chosen as the input layer of the neural network: the FRP sheet width (b_f), thickness (t_f), bond length (L), tensile strength (f_t), modulus of elasticity (E_f), prestress applied to the FRP sheet (P_f), the width and height of the RC beam section (b_c and h_c), upper reinforcement (A_s), lower reinforcement (A_s^(b)), concrete modulus of elasticity (E), compressive strength of concrete (f_cu), and tensile strength of concrete (f_u). The only output of the network model is the corresponding flexural ultimate load capacity of the strengthened RC beam (P_u). Figure 1 illustrates the ranges of the 13 input variables and 1 output variable. (The numbers above each histogram indicate the range in which the variable lies, and the numbers within the histogram indicate the count of samples within that range.)

Model evaluation metrics
Existing literature on performance indexes indicates that the influence of magnitude is eliminated in the neural networks due to data normalization (Botchkarev 2019; Naser 2021). To avoid mutual cancellation of positive and negative errors, four absolute-value or squared-error measures, the mean absolute error (MAE), mean square error (MSE), correlation coefficient (R), and root mean square error (RMSE), are selected as the performance evaluation indexes of the three neural networks. The R represents the degree of fit of the regression results to the measured values; a value closer to 1 means a better fit. The RMSE and MSE values reflect the degree of difference between the predicted and measured values, with smaller values indicating greater model accuracy. To give designers an intuitive sense of the prediction accuracy and neural network efficiency, the mean absolute percentage error (MAPE) and comparisons of the CPU time consumed at the end of training are added to visually evaluate the networks. The calculation formulas are:

\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left| y_i - \hat{y}_i \right|

\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left( y_i - \hat{y}_i \right)^2

\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left( y_i - \hat{y}_i \right)^2}

R = \frac{\sum_{i=1}^{n}\left( y_i - \bar{y} \right)\left( \hat{y}_i - \bar{\hat{y}} \right)}{\sqrt{\sum_{i=1}^{n}\left( y_i - \bar{y} \right)^2 \sum_{i=1}^{n}\left( \hat{y}_i - \bar{\hat{y}} \right)^2}}

\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left| \frac{y_i - \hat{y}_i}{y_i} \right|

where n denotes the number of samples, and y_i, \bar{y}, \hat{y}_i, and \bar{\hat{y}} represent the actual sample value, sample mean, model predicted value, and model predicted mean, respectively.
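As a concreteness check, the metrics above can be computed with a few lines of code. The following pure-Python sketch is illustrative; the function name and return format are our own:

```python
import math

def metrics(y_true, y_pred):
    """MAE, MSE, RMSE, R, and MAPE as defined in the text."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(mse)
    mean_t = sum(y_true) / n
    mean_p = sum(y_pred) / n
    # R: covariance of (actual, predicted) over the product of their spreads
    cov = sum((t - mean_t) * (p - mean_p) for t, p in zip(y_true, y_pred))
    var_t = sum((t - mean_t) ** 2 for t in y_true)
    var_p = sum((p - mean_p) ** 2 for p in y_pred)
    r = cov / math.sqrt(var_t * var_p)
    mape = 100.0 * sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / n
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R": r, "MAPE": mape}
```

Because the data fed to the networks are normalized, all of these metrics are dimensionless in this study.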

BP neural network
The BP neural network is a multilayer feedforward approach trained with the error BP algorithm and is one of the most widely used models. The BP neural network learns and stores many input-output pattern mapping relationships without revealing the associated mathematical equations beforehand. Its learning rule uses gradient descent to continuously adjust the weights and thresholds of the network via BP to minimize the sum of squared network errors. The BP neural network model includes the input, hidden, and output layers.
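To make the training loop concrete, the following minimal NumPy sketch trains a 1-17-1 BP network with sigmoid activations by plain gradient descent on a toy regression target. All names, the toy data, and the hyperparameters are illustrative; they are not the paper's dataset or settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy regression target scaled into (0, 1) so a sigmoid output can represent it.
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
Y = 0.5 + 0.4 * np.sin(2.0 * np.pi * X)

n_in, n_hid, n_out = 1, 17, 1          # input-hidden-output architecture
W1 = rng.normal(0.0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)

lr = 0.5                                # learning rate
losses = []
for epoch in range(2000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)            # hidden activations
    out = sigmoid(H @ W2 + b2)          # network output
    err = out - Y
    losses.append(float(np.mean(err ** 2)))
    # Backward pass: propagate the MSE gradient through both sigmoid layers
    d_out = (2.0 / len(X)) * err * out * (1.0 - out)
    dW2 = H.T @ d_out; db2 = d_out.sum(axis=0)
    d_hid = (d_out @ W2.T) * H * (1.0 - H)
    dW1 = X.T @ d_hid; db1 = d_hid.sum(axis=0)
    # Gradient-descent weight update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The squared error decreases monotonically in expectation as the weights move opposite to the error gradient, which is exactly the behavior the L-M and BR variants below accelerate or regularize.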

L -M neural network
The L-M neural network achieves its convergence characteristics by adaptively adjusting a damping factor. This approach has a higher iterative convergence speed and obtains stable and reliable solutions in many nonlinear optimization problems. The original BP learning algorithm uses gradient descent, and the parameters move opposite to the error gradient so that the error decreases until the requirement is satisfied. The L-M algorithm combines gradient descent and the Gauss-Newton method, with both the local convergence of Gauss-Newton and the global characteristics of gradient descent. Its descent is fast, requiring fewer training steps than the traditional BP neural network, so the L-M algorithm optimizes the BP algorithm (Wei Peng 2016). The iterative approach of the L-M algorithm is briefly described as:

\omega_{k+1} = \omega_k + \Delta\omega

\Delta\omega = -\left[ J^{T}(\omega) J(\omega) + \mu I \right]^{-1} J^{T}(\omega)\, e(\omega)

where ω denotes the network weights and thresholds, ω_k denotes the vector of the weights and thresholds at the kth iteration, ω_{k+1} denotes the vector of the new weights and thresholds, Δω denotes the weight increments, μ denotes the user-defined learning rate (damping factor), I denotes the unit matrix, e(ω) denotes the error, and J(ω) denotes the Jacobian matrix of the errors with respect to the weights. In practice, μ is a trial parameter. For a given μ, if the resulting Δω decreases the error indicator function E(ω), then μ is decreased; otherwise, μ is increased. In this way, the error indicator function can be quickly reduced to its minimum value. The error indicator function is given by:

E(\omega) = \frac{1}{2}\sum_{i=1}^{P}\left( Y_i - Y_i' \right)^{T}\left( Y_i - Y_i' \right)

where Y_i denotes the desired network output vector, Y_i' denotes the actual output vector, and P denotes the number of samples.
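The update rule above can be sketched on a one-parameter least-squares problem. This illustrative NumPy example (toy data and names are our own) fits f(x; a) = exp(a·x) and adapts μ exactly as described: accepted steps shrink μ toward Gauss-Newton behavior, rejected steps grow it toward gradient descent:

```python
import numpy as np

# Toy data generated from y = exp(0.7 x); model f(x; a) = exp(a x), one parameter.
x = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * x)

def residuals(a):
    return y - np.exp(a * x)

def jacobian(a):
    # d(residual)/da = -x * exp(a x); one column per parameter
    return (-x * np.exp(a * x)).reshape(-1, 1)

a, mu = 0.0, 0.01
for _ in range(50):
    e = residuals(a)
    J = jacobian(a)
    # L-M step: delta = -(J^T J + mu I)^(-1) J^T e
    delta = -np.linalg.solve(J.T @ J + mu * np.eye(1), J.T @ e)
    if np.sum(residuals(a + delta[0]) ** 2) < np.sum(e ** 2):
        a += delta[0]   # error decreased: accept step, reduce damping
        mu *= 0.5
    else:
        mu *= 2.0       # error increased: reject step, increase damping
```

For small μ the step approaches the Gauss-Newton step; for large μ it approaches a short gradient-descent step of length 1/μ, which is the blend of local and global behavior described above.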

BR neural network
The BR neural network combines the traditional least-squares error function with an additional regularization term, giving the objective function:

S(w) = \beta E_D + \alpha E_w, \qquad E_w = \sum_{i=1}^{m} w_i^2

where α and β are regularization parameters (hyperparameters), E_D is the sum of squared errors, E_w is the penalty term for large weights, and m is the number of weights. In the Bayesian framework, the learning process considers weight-vector uncertainty by assigning a probability distribution representing the relative confidence in different values. The distribution is initially set to some prior. Once the data D are observed, it is converted into a posterior distribution using Bayes' theorem:

P(w \mid D, \alpha, \beta) = \frac{P(D \mid w, \beta)\, P(w \mid \alpha)}{P(D \mid \alpha, \beta)}

where P(w|α) is the prior density indicating the confidence in the weights before any data are collected, P(D|w, β) is the likelihood function giving the error probability, and P(D|α, β) is the normalization factor, also called the evidence of the model. Equation (11) can be expressed in words as: posterior = (likelihood × prior) / evidence. The optimal weights of the model are obtained by maximizing the posterior probability during training, which corresponds to minimizing the regularized objective function in Equation (10). Assuming Gaussian distributions for the weights and the data noise, the prior probability of the weights can be written as:

P(w \mid \alpha) = \left( \frac{\alpha}{\pi} \right)^{m/2} \exp\left( -\alpha E_w \right)

Similarly, the probability of the error can be written as:

P(D \mid w, \beta) = \left( \frac{\beta}{\pi} \right)^{n/2} \exp\left( -\beta E_D \right)

Then, the final posterior distribution can be obtained as:

P(w \mid D, \alpha, \beta) = \frac{1}{Z_S(\alpha, \beta)} \exp\left( -S(w) \right)

where Z_S(α, β) is a normalization constant. For the regularization parameters α and β, we use Bayes' theorem again to infer their optimal values from the data.
P(\alpha, \beta \mid D) = \frac{P(D \mid \alpha, \beta)\, P(\alpha, \beta)}{P(D)}

where P(α, β) is the prior probability of the regularization parameters α and β, and P(D|α, β) is the likelihood term, called the evidence for α and β; it is the same quantity that appears as the normalization factor in Equation (11). Maximizing this evidence yields the updated hyperparameters:

\alpha = \frac{\gamma}{2 E_w(w)}, \qquad \beta = \frac{n - \gamma}{2 E_D(w)}

where the effective number of parameters γ is given by:

\gamma = m - 2\alpha\, \mathrm{tr}\left( A^{-1} \right)

where m is the number of parameters, and A = ∇²S(w) is the Hessian matrix of the objective function S(w).
In the Bayesian framework, an optimization process finds the optimal weights in Equation (10) and the optimal values of α and β in Equation (16). The iterative process is as follows.
(1) Select the initial values for α and β and initialize the weights.
(2) Use the L-M algorithm to find the weights that minimize the objective function S(w).
(3) Calculate the effective number of parameters γ and the new values of α and β.
(4) Repeat steps (2) and (3) until convergence.
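As an illustration of steps (1)-(4), the following sketch applies the same evidence-based α, β updates to a linear-in-the-weights toy model, where step (2) has a closed-form solution instead of requiring the L-M algorithm. The data are synthetic and our own; the code uses the equivalent convention E_D = ½Σe², E_w = ½Σw², under which the effective-parameter formula becomes γ = m − α·tr(A⁻¹):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: t = Phi @ w_true + Gaussian noise (illustrative, not the paper's data).
n, m = 100, 5                          # samples, weights
Phi = rng.normal(size=(n, m))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
t = Phi @ w_true + rng.normal(scale=0.1, size=n)

alpha, beta = 1.0, 1.0                 # step (1): initial hyperparameters
for _ in range(50):
    # step (2): minimize S(w) = beta*E_D + alpha*E_w (closed form for a linear model)
    A = beta * Phi.T @ Phi + alpha * np.eye(m)    # Hessian of S(w)
    w = beta * np.linalg.solve(A, Phi.T @ t)
    E_w = 0.5 * float(w @ w)
    E_d = 0.5 * float(np.sum((t - Phi @ w) ** 2))
    # step (3): effective number of parameters, then re-estimate alpha and beta
    gamma = m - alpha * np.trace(np.linalg.inv(A))
    alpha = gamma / (2.0 * E_w)
    beta = (n - gamma) / (2.0 * E_d)
# step (4): in practice, loop until alpha and beta stop changing
```

At convergence, γ reports how many of the m weights are well determined by the data, and β approaches the inverse noise variance, which is what makes BR resistant to overfitting without a validation set.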

Neural network structure
The modeling problem of predicting the flexural ultimate load capacity of prestressed FRP-RC arises from the nonlinear input-output relationship between the flexural ultimate load capacity of RC and its influencing factors. Thus, the problem of predicting the flexural ultimate load capacity of concrete reduces to establishing a multi-input, single-output network structure. Thirteen influencing factors, including the compressive strength of concrete, were selected as the input layer of the neural network. The corresponding flexural ultimate load capacity of the RC beam after strengthening is used as the single output to build the network model. The key to determining the network structure is selecting the appropriate numbers of hidden layers and neurons. For given input and output layers, too few hidden layers and neurons will readily lead to underfitting, that is, insufficient model fitting; too many will lead to overfitting, i.e., excessively strong model fitting (Guoliang 2021). Therefore, when determining the neural network structure, the number of hidden layers and neurons is an important factor that affects the prediction results. As this study uses a simple data set that does not involve time series, the number of hidden layers is one, and the number of neurons in the hidden layer is based on the empirical formula:

h = \sqrt{n + m} + a

where h is the number of nodes in the hidden layer, n is the number of nodes in the input layer, m is the number of nodes in the output layer, and a is a constant.
Therefore, the number of neurons in the hidden layer is searched between 4 and 22, based on 13 input nodes and 1 output node. Taking the BP neural network as an example, the MSE, R, and average error percentage of the validation set are used as the evaluation indexes of the prediction results to find the optimal number of hidden layer nodes. The results are shown in Figure 2(c). The best comprehensive performance is achieved with 17 neurons in the hidden layer, with an MSE of 0.023383, an R of 0.94351, and an average error of 30.71%. This structure was also used in the L-M and BR neural networks to predict the flexural ultimate load capacity of RC beams. The validation set error with 17 hidden layer nodes is shown in Figure 2(d); most of the validation set errors are concentrated around −30%. The structure of the neural network model is illustrated in Figure 3: the input layer has 13 nodes corresponding to the 13 influencing factors of the ultimate flexural load of concrete, there is one hidden layer with 17 nodes, and the output layer has the ultimate flexural load of concrete as its single node.

Neural network parameters
The neural network parameters critically impact the performance and prediction results, so they need to be tuned for performance optimization.Four important parameters of the neural network are selected for tuning.The learning rate determines the step length and direction of the weight updates in each iteration.If the learning rate is too small, the network converges slowly and requires many iterations to achieve a good fit.If the learning rate is too large, the weights may oscillate and fail to converge to the global optimal solution.The momentum coefficient accelerates the convergence process and enhances the network stability.The network may fail to converge or overfit if the momentum coefficient is too large.If the momentum coefficient is too small, the convergence speed of the network may be reduced and it may easily get stuck at local optima.The regularization coefficient can control the network complexity and prevent overfitting.A regularization coefficient that is too large may lead to underfitting.A regularization factor that is too small may not be able to effectively control the network complexity, affecting the generalization ability.The number of iterations determines the rounds required for training.If there are too few iterations, the desired fitting effect may not be achieved.Too many iterations may lead to overfitting or excessive training times.
The validation set MSE is used as the performance evaluation index of the neural network to observe the effect of varying each of the four parameters. One parameter is adjusted while the others are held constant at their default values. The BR neural network is based on Bayesian statistics and effectively avoids overfitting by introducing a prior distribution to control the weight updates, in contrast to traditional neural networks. The BR neural network does not necessarily use the validation set for model selection and tuning and instead uses the default parameters. Therefore, this paper only tuned the four parameters of the BP and L-M neural networks. The results are shown in Figure 4.
The optimal parameters for each network are shown in Table 1. This paper utilizes the sigmoid function f(x) = 1/(1 + e^{−x}) for neuron activation, which returns a value between 0 and 1 in the output layer. The BP neural network achieves the best validation-set MSE values at 10,000 epochs, a learning rate of 0.05, a momentum factor of 0.8, and a regularization factor of 0.01; the optimal MSE values under single-parameter control were 0.0327, 0.0257, 0.02748, and 0.0389, respectively. The L-M neural network achieved the optimal validation-set MSE values with 100,000 epochs, a learning rate of 0.01, a momentum factor of 0.7, and a regularization factor of 0.001; the optimal MSE values under single-parameter control were 0.001475, 0.00238, 0.002829, and 0.00198, respectively.

Prediction performance of neural network models
After determining the neural network structure and parameters, the flexural ultimate loads of the prestressed FRP-RC beams are predicted using the L-M and BR neural networks. The prediction results are compared with the experimental values in Figures 5 and 6, and the error percentage distributions are shown in Figures 7 and 8.
From Figures 5 and 6, the prediction results of both the L-M and BR neural networks are close to the experimental values. From Figures 7 and 8, more than 65% of the prediction error percentages for the L-M and BR neural networks lie in the interval [−10%, 10%], and more than 85% lie in the interval [−20%, 20%].

Input variable contributions to the output variable
The contributions of the input variables were analyzed to investigate their influence on the output variable and thus gain a comprehensive understanding of the prediction models. The contribution of an input variable can be evaluated by its relative importance (RI), which can be calculated using the analysis method proposed by Garson (1991), as presented in Eq. (19). Based on the prediction performance of the three neural networks, the contribution of the input variables to the output variable is analyzed with the best-performing BR neural network. The weights and biases of the BRNN model are extracted as follows: where w_1 is the weight matrix of the input layer to the hidden layer, b_1 is the bias vector of the hidden layer, w_2 is the weight vector of the hidden layer to the output layer, and b_2 is the bias of the output layer.
RI_i = \frac{\sum_{j=1}^{K} \left( \left| w_{1,ij} \right| \big/ \sum_{l=1}^{N} \left| w_{1,lj} \right| \right) \left| w_{2,j} \right|}{\sum_{i'=1}^{N} \sum_{j=1}^{K} \left( \left| w_{1,i'j} \right| \big/ \sum_{l=1}^{N} \left| w_{1,lj} \right| \right) \left| w_{2,j} \right|}

where RI_i represents the RI of the ith input variable, w_{1,ij} is the connection weight between the ith input variable and the jth neuron in the hidden layer, w_{2,j} is the connection weight between the jth neuron in the hidden layer and the output variable, and N and K are the numbers of input variables and hidden neurons, respectively. Figure 11 illustrates the RI of the input variables in the BRNN model. Among the 13 influencing factors, the most impactful ones come from the concrete beam itself. The beam width, concrete modulus of elasticity, and lower reinforcement have the greatest influence on the flexural load-carrying capacity, with RIs of 10.38%, 9.88%, and 9.56%, respectively. In contrast, the importance of the FRP sheet width, modulus of elasticity, and tensile strength is lower, at 3.96%, 5.59%, and 6.04%, respectively. None of the FRP reinforcement parameters is a determining factor for the flexural load-carrying capacity of RC beams. This is reasonable, as the FRP reinforcement only augments the original flexural load capacity of the RC beam, while the decisive role is still played by the initial beam parameters (Liu et al. 2017; Zhao et al. 2018).
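Garson's calculation reduces to a few array operations on the extracted weights. A NumPy sketch (the function name is our own) for a single-output network:

```python
import numpy as np

def garson_importance(w1, w2):
    """Relative importance (%) of each input variable via Garson's algorithm.

    w1 : (n_inputs, n_hidden) input-to-hidden weight matrix
    w2 : (n_hidden,)          hidden-to-output weight vector
    """
    # Share of each input within each hidden neuron, weighted by the
    # absolute hidden-to-output weight of that neuron, then normalized to 100%.
    q = np.abs(w1) / np.abs(w1).sum(axis=0) * np.abs(w2)
    ri = q.sum(axis=1)
    return 100.0 * ri / ri.sum()
```

For the 13-17-1 network used here, w1 would be a 13 × 17 matrix and w2 a length-17 vector; by construction the importances are positive and sum to 100%, matching how the RIs in Figure 11 are reported.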
Among the relevant parameters of the FRP, the width and length are the most influential on the reinforcement effect, which coincides with the results of the relevant FRP-RC beam tests (Yang 2006).

Decomposition method to handle noisy data
Singular value decomposition (SVD) is used to eliminate noisy data from the input layer and observe the resulting changes in neural network performance (Razzaghi, Madandoust, and Aghabarati 2021). The SVD is widely used in ML: it serves not only for feature decomposition in dimensionality reduction but also in recommendation systems, natural language processing, and other areas, and it is the cornerstone of many ML algorithms. The SVD decomposes the training data matrix A into three matrices U, D, and V whose product recovers the original matrix:

A = U D V^{T}

A threshold of 0.01 is chosen, and all singular values in D smaller than the threshold are set to 0 to construct the diagonal matrix D'. The original matrix A is then updated using D' to obtain:

A' = U D' V^{T}

This new matrix A' is called the denoised matrix and provides the data to retrain the neural network models. Based on the above method, the 13 × 243 input layer matrix containing the 13 input layer variables is set up as A.
The matrix is decomposed by SVD into the denoised matrix A' and then fed back into each neural network model to predict the flexural ultimate load of concrete. The R and MSE are used as the performance evaluation indexes, as shown in Table 3, which indicates that the performance of each neural network has improved. The BP neural network shows the largest improvement, with a 17.59% reduction in MSE and a 1.724% increase in R.
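The thresholding step can be sketched as follows; the function name is our own, and the 0.01 default follows the threshold stated in the text:

```python
import numpy as np

def svd_denoise(A, threshold=0.01):
    """Zero all singular values below `threshold` and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_denoised = np.where(s < threshold, 0.0, s)
    return U @ np.diag(s_denoised) @ Vt
```

Applied to the 13 × 243 input matrix, the denoised A' keeps the same shape but drops the near-zero singular directions, which correspond to noise-dominated combinations of the input variables.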

Conclusion
ANN models based on three algorithms were used to predict the ultimate flexural loads of RC beams strengthened with prestressed FRP sheets. The performances of the three neural networks were evaluated after determining the effects of the network structure and parameters. The BR neural network was identified as a method that can accurately predict the flexural ultimate loads of prestressed FRP-RC beams. The contributions of the input variables were analyzed, and the SVD method was used to eliminate noisy data in the input layer. The following conclusions are drawn from the results.
• The optimal number of hidden layer nodes for the three neural networks was determined to be 17, using the MSE value, R-value, and average error of the validation set as evaluation metrics. The optimal parameters of the BP and L-M neural networks were determined using the validation-set MSE as the evaluation metric. The BP neural network achieves its best validation-set MSE at 10,000 epochs, a learning rate of 0.05, a momentum factor of 0.8, and a regularization factor of 0.01. The L-M neural network achieved its optimal validation-set MSE with 100,000 epochs, a learning rate of 0.01, a momentum factor of 0.7, and a regularization factor of 0.001. The BR neural network updates its parameter distributions during training through Bayesian inference, so no parameter tuning was performed for it.
• After confirming the accuracy of the prediction models, a feature importance analysis was performed using the weights and thresholds of the BR neural network to justify the model. The beam width, concrete modulus of elasticity, and bottom reinforcement were the most influential factors on the flexural capacity of the RC beam, with RIs of 10.38%, 9.88%, and 9.56%, respectively. Among the parameters related to the FRP reinforcement, the width and length of the FRP and the applied prestress had the greatest influence on the reinforcement effectiveness, with RIs of 7.84%, 6.52%, and 6.45%, respectively. Experimenters may consider prioritizing the parameters that have the greatest impact on the flexural ultimate load to improve the flexural performance of concrete beams.
• The three neural network models were retrained after eliminating noisy data in the input layer using the SVD method. The results showed that the performance of all three models improved to some extent, with the BP neural network showing the largest improvement: a 17.59% decrease in the MSE and a 1.724% increase in R.
Although the model exhibits good performance, there are still some shortcomings. The prediction accuracy could be improved by collecting additional data or by increasing the number of influencing factors in the input layer. All proposed models use the most common sigmoid function in all layers; they could also be evaluated with other activation functions (such as linear, hyperbolic tangent, and ReLU). The neural

Figure 1. Distribution of the input and output variables of the neural network.

Figure 2. Performance of the BP neural network with different hidden layer nodes.

Figure 3. Prediction results of the prestressed FRP-RC flexural ultimate load neural network model.

Figure 4. Effect of parameter changes on the performance of the neural network models.

Figure 5. Predicted values from the L-M neural network.
Violin plots and a Taylor diagram are shown in Figures 9 and 10, respectively, to better compare the prediction results of the three neural networks. In the Taylor plot, the standard deviation of the test values (0.1696315, with MSE = 0 and R = 1) is used as the plotting reference. The BR neural network has better prediction performance than the other two networks.

Figure 6. Predicted values from the BR neural network.

Figure 9. Violin plots of each neural network versus test values.

Figure 10. Taylor diagram for each neural network.

Figure 11. Relative importance of the input variables for the BRNN model.

Table 2 compares the prediction results of the three neural networks.

Table 1. Model architectures of the considered neural networks.

Table 2. Neural network performance comparisons.

Table 3. Performance change before and after SVD optimization.