AI-based Lagrange optimization for designing reinforced concrete columns

ABSTRACT Structural engineers face numerous code-restricted design decisions. Codes impose many conditions and requirements on the designs of structural frames, such as columns and beams, and it is difficult to intuitively find optimized solutions that satisfy all code requirements simultaneously. Engineers commonly make design decisions based on empirical observations. Optimization techniques can be employed to make more rational engineering decisions, resulting in designs that meet various code restrictions simultaneously. Lagrange optimization techniques with constraints, not based on explicit parameterization, are implemented to make rational engineering decisions and to find minimized or maximized design values by solving nonlinear optimization problems under the strict constraints imposed by design codes. It is difficult to express objective functions analytically, directly in terms of the design variables, in order to use derivative methods such as Lagrange multipliers. This study proposes the use of a neural network to approximate well-behaved objective functions and other output parameters as one universal function, which also gives a generalizable solution for operating on the Jacobian and Hessian matrices to solve the Lagrangian function. The proposed method was applied successfully to optimizing the cost of a reinforced concrete column under various design requirements. The efficacy of the optimal results was also verified against 5 million datasets.


Introduction
Several studies, including (Aghaee, Yazdi, and Tsavdaridis 2014), (Fanaie, Aghajani, and Dizaj 2016), (Nasrollahi et al. 2018), and (Paknahad, Bazzaz, and Khorami 2018), have been conducted to optimize reinforced concrete (RC) structures. These studies mainly focused on minimizing manufacturing and construction costs; only a few considered structural capabilities against external forces, which are governed by design codes. Studies on the optimization of RC beams have been reported by (Shariati et al. 2010), (Fanaie, Aghajani, and Shamloo 2012), (Toghroli et al. 2014), (Awal, Shehu, and Ismail 2015), (Kaveh and Shokohi 2015), (Safa et al. 2016), (Shah et al. 2016), (Korouzhdeh, Eskandari-Naddaf, and Gharouni-Nik 2017), and (Heydari and Shariati 2018). (Barros, Martins, and Barros 2005) presented stress-strain diagrams using the conventional Lagrangian multiplier method (LMM) to develop nominal moment strengths based on the optimal areas of the upper and lower sections of steel for four classes of concrete. (Shariat et al. 2018) obtained analytical objective functions for the cost of frames as a function of the design parameters for structural systems in a limited circumstance; however, it is generally very difficult to derive analytical objective functions that represent the entire behavior of structural components such as columns and beams. (Villarrubia et al. 2018) employed artificial neural networks (ANNs) to approximate objective functions in place of analytical objective functions. They approximated objective functions using nonlinear regression and a multilayer perceptron when the use of linear programming or Lagrange multipliers was not feasible.

Significance of the study
There are numerous computer-aided engineering tools available, including CAD packages, FEM software, and self-written calculation codes, that are used to study the performance of structures. Objective functions, however, may constitute a mixture of numerical simulations, analytical calculations, and catalog selections, which makes it difficult to apply derivative-based optimization methods, such as the Lagrange multiplier method, that require differentiation. Some non-derivative optimization methods, such as genetic algorithms, have been applied to structural design problems (Rajeev and Krishnamoorthy 1998; Camp, Pezeshk, and Hansson 2003), as they do not require any derivatives to find an optimal solution. However, the computing times of non-derivative methods rely heavily on the computational speed of the engineering tools, because each trial requires one run of the software. In this study, artificial neural networks were adopted to universally approximate objective functions obtained from any computer-aided engineering tool. The new objective functions, hence, not only enhance the computational speed compared to conventional software but can also be differentiated and implemented in Lagrange optimization.
Optimization and sensitivity analyses using the computational LMM can be based on ANNs. The optimized results were verified using rectangular RC columns. The analysis was conducted to obtain the minimum design cost for reinforced concrete columns, as specified by the American Concrete Institute regulations (ACI-318 2014). Moreover, a sensitivity analysis was performed on the cost with respect to the effective parameters, including the rebar ratios and failure criteria. Accordingly, various failure criteria were developed for use in designing RC columns. Numerical examples are also presented to better illustrate the design steps. Complex but inaccurate analytical objective functions, such as those describing the cost of a structural frame and CO2 emissions, were replaced by ANN-based objective functions. The sensitivity analysis of the LMM revealed that the best optimal values based on the constraints can be identified for specific situations. Optimization using artificial intelligence (AI)-based objective functions built on large datasets, without the need for prior optimization knowledge, can effectively aid the selection of design parameters for best practice.
A strength of the proposed method is that its performance is less dependent on the problem type (column, beam, frame, seismic design, etc.) and relies instead on the characteristics of the big dataset for the considered problem. Once the big data are good enough to generate the approximated objective function and other parameters using an ANN, the optimization solution becomes generalizable through the AI-based Lagrange method. Therefore, applications of the proposed method are not restricted to optimizing RC columns; they can be extended to other problems, such as optimizing beam-column connections (Ye et al. 2021), GFRP RC columns (Sun et al. 2020), and RC shear walls (Zhang et al. 2019; Yazdani, Fakhimi, and Alitalesh 2018) subjected to lateral impact loading (Zhang, Gholipour, and Mousavi 2021) or even severe impulsive loading (Abedini and Zhang 2021).

Lagrange procedures based on ANNs
The Lagrange multiplier method (LMM) of Joseph-Louis Lagrange optimizes objective functions with constraints by identifying the saddle point of the Lagrange function, as mentioned by (Walsh 1975) and (Kalman 2009); the saddle point can be identified among local stationary points based on the Hessian matrix, i.e., the second derivatives (Silberberg and Wing 2001). To find the stationary (saddle) points of a Lagrangian function, the function must be formed as a function of the constrained input variables and the Lagrange multiplier λ (Protter and Morrey 1985). This can be achieved by solving systems of nonlinear differential equations that lead to the identification of the maximum or minimum of the Lagrange function subjected to inequality and equality constraints (Hoffmann and Bradley 2004).

Optimization using LMM and Newton-Raphson method
LMM finds stationary points (saddle points; the maximum or minimum of a Lagrange function) when the Lagrange function L is considered as a function of the variables x = [x1, x2, ..., xn]^T and the Lagrange multipliers for the equality and inequality constraints, λc = [λc1, λc2, ..., λcm]^T and λv = [λv1, λv2, ..., λvl]^T, respectively, as shown in Eq. (1):

L(x, λc, λv) = f(x) + λc^T c(x) + λv^T S v(x)    (1)

where f(x) is a multivariate objective function subjected to the equality and inequality constraints. The diagonal matrix of the inequalities, S (Eq. (2)), activates an inequality as an equality if the inequality condition is not satisfied, or deactivates it if the considered parameters lie within the range defined by the inequality constraints.
where s_i is the status of the inequality constraint v_i; s_i = 1 when v_i is active and s_i = 0 when v_i is inactive. The stationary points of the Lagrange function (Eq. (1)) can then be identified by solving the partial differential equations with respect to x, λc, and λv (Eq. (3)), which find the slopes of the parallel tangent lines of the objective function f(x) and the constraints c(x) and v(x).
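As a minimal sketch of how Eqs. (1)-(2) can be assembled in code, the Lagrangian below combines an illustrative objective, one equality constraint, and one inequality constraint switched on or off by the diagonal matrix S; the functions and numbers are placeholders for demonstration, not the column-design problem.

```python
import numpy as np

# L(x, λc, λv) = f(x) + λc·c(x) + λv·(S v(x))   (Eq. (1))
# S = diag(s), s_i = 1 when v_i is active, s_i = 0 when inactive  (Eq. (2))
def lagrangian(x, lam_c, lam_v, f, c, v, s):
    S = np.diag(s)
    return f(x) + lam_c @ c(x) + lam_v @ (S @ v(x))

# illustrative placeholder problem
f = lambda x: x[0]**2 + x[1]**2
c = lambda x: np.array([x[0] + x[1] - 1.0])   # equality constraint c(x) = 0
v = lambda x: np.array([x[0] - 0.2])          # inequality constraint v(x) >= 0

x = np.array([0.5, 0.5])
# v is satisfied at x, so it is deactivated (s = [0]) and L reduces to f(x)
val = lagrangian(x, np.array([0.0]), np.array([0.0]), f, c, v, [0])  # -> 0.5
```

With s = [1] and a nonzero λv, the same call would append the inequality to the Lagrangian as if it were an equality, which is exactly the switching behavior Eq. (2) describes.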
∇c(x) = [∇c1(x), ..., ∇cm(x)]^T and ∇v(x) = [∇v1(x), ..., ∇vl(x)]^T are the Jacobians of the constraint vectors c and v, respectively, at x. The main advantage of Lagrange multipliers is that they convert constrained optimization problems into unconstrained optimization problems. Lagrangian functions are formulated based on the relationships between the gradients of the objective functions and those of the constraints of the original problems (Beavis and Dobbs 1990), such that the derivative test for unconstrained problems can still be applied. Lagrange algorithms linearize the restrictions and objective functions at a specific point in the space by employing derivatives and partial derivatives that are solved based on equality constraints, as shown in Eq. (3). The Newton-Raphson method is employed to solve the set of partial differential equations representing the tangent of the Lagrange function (Eqs. (3) and (5)), ∇L(x, λc, λv), which must be differentiated once more to linearize the partial differential Lagrange equations with respect to x, λc, and λv (Eq. (5)), leading to the stationary points of the Lagrange function L(x, λc, λv). The linear approximation of the tangent of the Lagrange function can be evaluated at x0 + Δx, very close to x0, as shown in Eq. (5), where the system of partial differential equations ∇L(x, λc, λv) is differentiable at x0. The Newton-Raphson method is based on a first-order approximation, which works for any system of equations whose functions are differentiable in the considered region.
The update [Δx, Δλc, Δλv]^T can be computed from Eq. (6). Generally, the variables x and the Lagrange multipliers λ can be updated after every iteration as in Eq. (7).
The first derivative of the Lagrange function (Eq. (1)), ∇L, is derived as follows: where Hf(x), Hci(x), and Hvi(x) are the Hessian matrices of the objective function f(x), the equality constraint ci, and the inequality constraint vi, respectively, with respect to the variable vector x. The Newton-Raphson approximation procedure is repeated until convergence is achieved.
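The Newton-Raphson iteration of Eqs. (5)-(7) can be sketched on a small illustrative problem (minimize x² + y² subject to x + y = 1); the objective, constraint, bordered Hessian, and starting point are assumptions for the demonstration, not the paper's column objective.

```python
import numpy as np

def grad_L(z):
    # ∇L = 0 stationarity conditions for L = x^2 + y^2 + λ(x + y - 1)
    x, y, lam = z
    return np.array([2*x + lam,      # ∂L/∂x
                     2*y + lam,      # ∂L/∂y
                     x + y - 1.0])   # ∂L/∂λ = equality constraint c(x)

def hess_L(z):
    # second derivative of L: [H_f  J_c^T; J_c  0] (constant for this quadratic)
    return np.array([[2.0, 0.0, 1.0],
                     [0.0, 2.0, 1.0],
                     [1.0, 1.0, 0.0]])

z = np.array([3.0, -2.0, 0.0])       # initial guess [x0, y0, λ0]
for _ in range(20):
    step = np.linalg.solve(hess_L(z), -grad_L(z))   # linear solve, Eq. (6)
    z = z + step                                     # update, Eq. (7)
    if np.linalg.norm(step) < 1e-10:
        break
# converges to the stationary point x = y = 0.5, λ = -1
```

Because the toy problem is quadratic, one Newton step lands exactly on the stationary point; for the ANN-based objective, the Hessian changes with x and the loop iterates until the step norm converges.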

Formulation of universal approximation function using neural network
One problem in Lagrange optimization is that the objective function, f(x), and/or the output parameters that appear in the constraints, c(x) and v(x), are sometimes complex or impossible to express as twice-differentiable functions, which the Lagrange optimization method requires. Even when it is possible, deriving the Jacobian and Hessian matrices of the objective function and the constraints is not only difficult and expensive but also yields a solution that does not generalize across optimization problems (e.g. optimizing columns, beams, and/or other structural systems).
In this study, an artificial neural network was employed to approximate any well-behaved objective functions and other parameters into one universal function, as shown in Eq. (10), which can also give a generalizable solution for the Jacobian and Hessian matrices.

Neural network-based universal approximation function
where x is the input (vector of features); L the number of layers, including hidden and output layers; W^(l) the weight matrix between layer l−1 and layer l; b^(l) the bias vector of layer l; and g^(N) and g^(D) the normalization and denormalization functions, respectively. The min-max normalization function shown in Eq. (10b) is used in this study.
where x̄ is the normalization of x between the minimum and maximum values x̄min = −1 and x̄max = 1, respectively. The coefficient αx is the ratio of the normalized data range (x̄max − x̄min) to the original data range (xmax − xmin), as shown in Eq. (10c). Similarly, the denormalization function is expressed in Eq. (10d).
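The normalization pair of Eqs. (10b)-(10d) can be sketched as follows; the sample values are illustrative.

```python
import numpy as np

def normalize(x, x_min, x_max, t_min=-1.0, t_max=1.0):
    """Min-max normalization of Eq. (10b): maps [x_min, x_max] to [t_min, t_max]."""
    alpha = (t_max - t_min) / (x_max - x_min)   # coefficient α_x, Eq. (10c)
    return alpha * (x - x_min) + t_min

def denormalize(x_bar, x_min, x_max, t_min=-1.0, t_max=1.0):
    """Inverse mapping of Eq. (10d)."""
    alpha = (t_max - t_min) / (x_max - x_min)
    return (x_bar - t_min) / alpha + x_min

x = np.array([40.0, 70.0, 100.0])         # e.g. a data range of 40-100 (illustrative)
x_bar = normalize(x, 40.0, 100.0)          # -> [-1., 0., 1.]
x_back = denormalize(x_bar, 40.0, 100.0)   # recovers the original values
```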
The activation functions (tansig, tanh) shown in Figure 1, f_t^(l) at layer l, were implemented to formulate nonlinear relationships between the networks. As mentioned by Goodfellow (Bengio, Goodfellow, and Courville 2017), the hyperbolic tangent activation function (tansig, tanh) generally performs better than a sigmoid activation function. The function takes any real value as input and outputs values in the range −1 to 1; the larger the input, the closer the output to 1.0, whereas the smaller the input, the closer the output to −1.0. Figure 1 also illustrates the first and second derivatives of the tansig/tanh activation function, which are needed for the Jacobian and Hessian calculations. A linear activation function, f_lin^(L), was selected for the output layer because the output values are unbounded. For example, the output safety factor (SF) varies from 0.5 (normalized as −1) to 2 (normalized as 1) in the training datasets; however, SF could be either greater than 2 (normalized above 1) or smaller than 0.5 (normalized below −1) depending on the design values. Sigmoid or ReLU activation functions in the output layer are even worse for this case because their lower bounds are 0; they cannot represent any normalized safety factor smaller than 0 (denormalized, SF below 1.25) and may suffer from vanishing problems in these ranges. Linear activation at the output layer, on the other hand, predicts output values that are not distorted by activation-function saturation.
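The tansig/tanh activation and its first and second derivatives, needed for the Jacobian and Hessian calculations described above, can be written down and checked against finite differences:

```python
import numpy as np

def tansig(z):
    return np.tanh(z)

def tansig_d1(z):
    # first derivative: d/dz tanh(z) = 1 - tanh(z)^2  (used in the Jacobian)
    return 1.0 - np.tanh(z)**2

def tansig_d2(z):
    # second derivative: -2 tanh(z) (1 - tanh(z)^2)   (used in the Hessian)
    return -2.0 * np.tanh(z) * (1.0 - np.tanh(z)**2)

# finite-difference verification of both analytic derivatives
z, h = 0.7, 1e-6
assert abs(tansig_d1(z) - (tansig(z + h) - tansig(z - h)) / (2*h)) < 1e-8
assert abs(tansig_d2(z) - (tansig_d1(z + h) - tansig_d1(z - h)) / (2*h)) < 1e-8
```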

Formulation of Jacobian matrix for universal approximation function
In neural networks, universal approximation functions can also be expressed as a series of composite mathematical operations, as shown in Eq. (11a), where z^(l) is the output vector at layer l, which can be calculated using Eq. (11b), and where ⊙ denotes the Hadamard (element-wise) product. αy, ȳmin, and ymin are normalization parameters. In calculus, the chain rule is employed to calculate derivatives of such composite functions, as shown in Eq. (11). Formally, the Jacobian matrix of z^(l) with respect to x can be derived from the Jacobian matrix of z^(l−1). The Jacobian matrix of the universal approximation, J^(L), is then computed by forward propagation as shown in Eq. (13). Figure 2 illustrates the size configuration of the output vector, Jacobian, and Hessian of hidden layer l.
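A sketch of the forward-propagated Jacobian of Eq. (13) is given below for a small randomly weighted network; the layer sizes and weights are illustrative stand-ins for the trained network, and one Jacobian column is verified against a finite difference.

```python
import numpy as np
rng = np.random.default_rng(0)

sizes = [3, 5, 4, 1]   # n inputs, two tanh hidden layers, one linear output
Ws = [rng.normal(size=(sizes[l+1], sizes[l])) for l in range(3)]
bs = [rng.normal(size=(sizes[l+1],)) for l in range(3)]

def forward_with_jacobian(x):
    z = x
    J = np.eye(len(x))                        # J^(0) = dx/dx = I
    for l, (W, b) in enumerate(zip(Ws, bs)):
        u = W @ z + b
        if l < len(Ws) - 1:                   # hidden layer: tansig/tanh
            z = np.tanh(u)
            # chain rule: J^(l) = diag(f'(u)) W^(l) J^(l-1), with f' = 1 - tanh^2
            J = (1.0 - z**2)[:, None] * (W @ J)
        else:                                 # linear output layer
            z, J = u, W @ J
    return z, J

x = rng.normal(size=3)
y, J = forward_with_jacobian(x)

# finite-difference verification of the first Jacobian column
h = 1e-6
e0 = np.zeros(3); e0[0] = h
fd = (forward_with_jacobian(x + e0)[0] - forward_with_jacobian(x - e0)[0]) / (2*h)
assert np.allclose(J[:, 0], fd, atol=1e-6)
```

A single forward pass thus yields both the network output and its full Jacobian, which is what makes the Lagrange machinery cheap once the network is trained.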

Formulation of Hessian matrix for universal approximation function
This section summarizes the derivation of the AI-based Hessian matrix, which was developed with the MathWorks Technical Support Department (MATLAB 2020b). The Jacobian matrix of hidden layer l with m^(l) neurons with respect to the n input variables is a matrix of size m^(l) × n. The corresponding Hessian is therefore a third-order tensor, as illustrated in Figure 2, which can be an expensive operation. A convenient method is to explicitly calculate the slices of the Hessian (the second derivatives) and then obtain the full Hessian by reshaping the slices appropriately (MATLAB 2020b). A slice of the Hessian, H^(l)_i, is the derivative of the Jacobian J^(l) with respect to one of the input elements, x_i (Eq. (14)).
where ∂²z^(l)/∂x_i∂z^(l−1) can be obtained by applying the chain rule, as shown in Eq. (15). The expression ∂z^(l−1)/∂x_i can be written as J^(l−1)_i, the i-th column of the Jacobian J^(l−1). Substituting Eq. (15) into Eq. (14), we obtain Eq. (16). Applying forward propagation in Eq. (16), the slice of the Hessian at the final layer, L, can be obtained. The Lagrange optimization problem presented in Section 3.1 can then be solved in a generalizable way using a neural network to generate the AI-based objective functions, Jacobian, and Hessian matrices, as shown in Eqs. (10), (13), and (18), respectively. The concept of the AI-based Lagrange optimization method is explained in the following three steps, as shown in Figure 3.
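The Hessian-slice recursion of Eqs. (14)-(16) can be sketched as follows, again with illustrative sizes and random weights; each slice H_i is the derivative of the Jacobian with respect to input x_i and is checked against finite differences of the Jacobian.

```python
import numpy as np
rng = np.random.default_rng(1)

sizes = [2, 4, 1]   # n = 2 inputs, one tanh hidden layer, linear output
Ws = [rng.normal(size=(sizes[l+1], sizes[l])) for l in range(2)]
bs = [rng.normal(size=(sizes[l+1],)) for l in range(2)]

def forward_all(x):
    n = len(x)
    z, J = x, np.eye(n)
    H = [np.zeros((n, n)) for _ in range(n)]      # slices H_i^(0) = 0 (J^(0) constant)
    for l, (W, b) in enumerate(zip(Ws, bs)):
        u = W @ z + b
        A = W @ J                                  # A = W^(l) J^(l-1)
        if l < len(Ws) - 1:
            t = np.tanh(u)
            d1 = 1.0 - t**2                        # f'(u)
            d2 = -2.0 * t * d1                     # f''(u)
            # slice recursion: H_i = diag(f'' * A[:,i]) A + diag(f') W H_i^(l-1)
            H = [d2[:, None] * A[:, [i]] * A + d1[:, None] * (W @ H[i])
                 for i in range(n)]
            z, J = t, d1[:, None] * A
        else:                                      # linear output layer
            H = [W @ H[i] for i in range(n)]
            z, J = u, A
    return z, J, H

x = np.array([0.3, -0.5])
y, J, H = forward_all(x)

# finite-difference check: H_i should equal dJ/dx_i
h = 1e-6
for i in range(2):
    e = np.zeros(2); e[i] = h
    fd = (forward_all(x + e)[1] - forward_all(x - e)[1]) / (2*h)
    assert np.allclose(H[i], fd, atol=1e-5)
```

Stacking the n slices reproduces the third-order Hessian tensor of Figure 2; working slice by slice avoids materializing that tensor inside the forward pass.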
-Step 1: Structural big datasets are generated from conventional design software, such as AutoCol. The number of datasets needed for training should be selected carefully based on the level of complexity of the considered problem.
-Step 2: AI-based objective functions are obtained from neural networks trained on the big datasets generated in Step 1. The accuracy of the AI-based model is considerably affected not only by the size of the dataset but also by the neural network parameters, such as the number of hidden layers, neurons, and required epochs. Therefore, a proper neural network framework should be employed to obtain good training results.
-Step 3: The Lagrange multiplier method is applied to optimize the AI-based objective functions, whose aim is to approximate any well-behaved objective function.

Application of AI-based Lagrange method on optimizing RC columns
In this study, conventional structural software (AutoCol) is employed to generate big datasets for neural network training. The column configuration (b × h), rebar ratio (ρs), and material properties are used to evaluate the structural performance of an RC column, such as the design axial force (ϕPn), design bending moment (ϕMn), and rebar strain (εs), against the factored load pair (Pu-Mu). Figure 4 and

Formulation of objective function and other parameters based on ANNs
The nine output parameters, including the objective function (CIc), are functions of seven variables; they are complex and difficult not only to implement analytically but also to differentiate for the Jacobian and Hessian matrices needed to solve the Lagrange function (Eq. (7)). AI-based neural networks are developed to universally approximate all the output parameters, as expressed in Eq. (10), and hence a single, uniform process for finding their Jacobian and Hessian matrices can be applied. For example, the objective function of the column cost index (CIc) is obtained using Eq. (19) from the given input parameters (b, h, ρs, f'c, fy, Pu, Mu). The equation is a forward-network-based weight-bias function with L layers and 80 neurons per layer, linked by weighted interconnections and biases through an activation function, thereby performing nonlinear numerical computations. An activation function (tansig, tanh), as shown in Figure 4, is used in Eq. (19).
The input for the networks, x = [b, h, ρs, f'c, fy, Pu, Mu]^T, is related to the neurons of fully connected successive layers using weights at each neuron and a bias in each hidden layer. The layers are then summed for the outputs, such as CIc, CO2 emissions, and Wc (Hong 2019). The neural networks are formulated to generalize trends (recognized as machine learning) between the inputs and outputs to obtain objective functions, rather than being based on analytical engineering mechanics or knowledge (Berrais 1999).
Similarly, the remaining outputs (ϕPn, ϕMn, SF, b/h, εs, CO2, Wc, αe/h) can also be formulated as functions of the seven inputs based on forward ANNs, as shown in Table 2. Table 2 presents the training results of the forward networks, in which three hidden-layer configurations (1, 2, and 5 layers) with 80 neurons are implemented. The 100,000 structural datasets are randomly divided into three subsets: a training set, a validation set, and a test set. According to Brian Ripley (Ripley 1996), the training set (70% of the big datasets) is a set of examples used for learning, that is, to fit the parameters, whereas the validation set is used to tune the neural network and avoid overfitting. The test subset, on the other hand, is independent of the training dataset and does not affect the training procedure; it is therefore used only to assess the performance of the fully specified model. Hence, in Table 2, the MSE of the test set (MSE T. Perf) is suitable for evaluating the goodness of the designs, indicating the capability of the trained model against unseen datasets.
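The random split described above can be sketched as follows; the paper states only the 70% training share, so the 15%/15% validation/test allocation here is an assumption for illustration.

```python
import numpy as np
rng = np.random.default_rng(42)

# random, non-overlapping 70/15/15 split of the 100,000 structural datasets
# (15/15 for validation/test is an assumed allocation of the remaining 30%)
N = 100_000
idx = rng.permutation(N)
n_tr, n_val = int(0.70 * N), int(0.15 * N)
train_idx = idx[:n_tr]                  # fit network weights
val_idx   = idx[n_tr:n_tr + n_val]     # tune the network, guard against overfitting
test_idx  = idx[n_tr + n_val:]         # held out: evaluate MSE T. Perf only
```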

Optimization of RC column using AI-based Lagrange method
The neural network models of the RC column presented in Table 2 are used to minimize the cost index (CIc) of an RC column under the several design requirements shown in Table 3. All design requirements can be expressed in terms of the equality constraints c(x) = [c1(x), ..., c6(x)]^T, as stated in Table 4. In addition, the rebar ratio ρs should be constrained following the ACI-318 code requirements (ρs,min ≤ ρs ≤ ρs,max), which are expressed in terms of two inequality constraints: v1(x) = ρs − ρs,min ≥ 0 and v2(x) = −ρs + ρs,max ≥ 0 (Table 4).
In Eq. (19), the CIc function for forward optimization is defined as CIc = f_FW^CIc(x), a function of the seven input parameters. According to Eqs. (1) and (3), the Lagrangian function and the Karush-Kuhn-Tucker (KKT) condition, with the Lagrange multipliers of the equality and inequality constraints, are formulated as shown in Eqs. (20) and (21), respectively; Table 4 summarizes the equality and inequality constraints. It is well known that the Newton-Raphson method relies heavily on a good initial vector x(0) = [b, h, ρs, f'c, fy, Pu, Mu]^T to expedite convergence as well as enhance accuracy. A good initial vector is predetermined based on the simple equality and active inequality constraints c2(x), c3(x), c4(x), c5(x) and v1(x), v2(x) (Table 4).

Inequality constraint (v 1 ) is activated
The initial vector when v1 is activated is obtained by predetermining five input variables (ρs, f'c, fy, Pu, Mu) from the simple equality constraints c2(x), c3(x), c4(x), and c5(x) and the active inequality v1(x). The initial vector x(0) = [b, h, 0.01, 40, 500, 1000, 3000]^T is used to find the saddle point of the Lagrange optimization function expressed in Eq. (20) based on the Newton-Raphson method. The unknown input parameters (b, h) of the initial vector are randomized within the large datasets and are determined during optimization.
The Newton-Raphson method is implemented to solve the partially differentiated Eq. (21) using one initial Lagrange multiplier vector, [λc1, λc2] = [0, 0], and 5² (25) initial vectors x(0) = [b, h, 0.01, 40, 500, 1000, 3000]^T, in which b and h are randomly distributed within the training data range. Initial values of 0 are used for the Lagrange multipliers because they are unbounded; they can be any number while the Newton-Raphson model computes the exact Lagrange multipliers. [b, h] = [1070.1, 1070.1] is the best value among the 25 trials based on the network with five hidden layers and 80 neurons, producing an optimal value of 202,275.4 when inequality v1 is activated, as shown in Table 5(c). The optimized results based on one and two hidden layers with 80 neurons are also shown in Table 5(a) and (b), with optimal values of CIc = 185,069.2 and 201,942.3, respectively. Similarly, the optimized results of Case 2 (inequality constraint v2 activated) and Case 3 (none activated) are obtained (Table 5).
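The case enumeration (v1 active, v2 active, none active) can be illustrated on a one-variable toy problem; the bounds and objective below are placeholders chosen so that the v1-active case governs, mirroring the structure of the search rather than the actual CIc optimization.

```python
# toy active-set enumeration: minimize f(x) = (x - 0.5)^2 subject to
# v1 = x - rho_min >= 0 and v2 = -x + rho_max >= 0 (numbers are illustrative)
rho_min, rho_max = 1.0, 8.0
f = lambda x: (x - 0.5)**2

candidates = []

# Case 3 "none active": unconstrained stationary point f'(x) = 0 -> x = 0.5,
# kept only if it already satisfies both inequalities
x_free = 0.5
if rho_min <= x_free <= rho_max:
    candidates.append(x_free)

# Case 1 "v1 active": solve with x - rho_min = 0 treated as an equality
candidates.append(rho_min)

# Case 2 "v2 active": solve with -x + rho_max = 0 treated as an equality
candidates.append(rho_max)

feasible = [x for x in candidates if rho_min <= x <= rho_max]
x_opt = min(feasible, key=f)   # the v1-active case governs here (x_opt = 1.0)
```

In the column problem each case is solved with the Newton-Raphson iteration instead of a closed form, but the logic is the same: solve every activation pattern, discard infeasible stationary points, and keep the feasible one with the lowest CIc.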
The optimized design results obtained using the Lagrange multipliers based on forward networks trained with one, two, and five layers and 80 neurons are listed in Table 5(a), (b), and (c), respectively, and are compared with those obtained using the structural software AutoCol (Table 6(a), (b), and (c)). In Table 6(a), the largest error, 11.79% for SF, occurred with one layer and 80 neurons. With two and five layers and 80 neurons (Table 6(b) and (c)), the errors were reduced to 1.55% for the design moment (ϕMn) and 2.7% for the design axial force (ϕPn), respectively, by the LMM based on forward networks.

Verification of CI c by large datasets
The goodness of the optimal designs presented in Table 5 is evaluated against large datasets, as shown in Figure 5. The lowest CIc in the large datasets is 195,716, where five million datasets are filtered by f'c = 40 MPa, fy = 500 MPa, SF = 1, and b/h = 1. The accuracy is demonstrated with CIc values of 184,684.6 (−5.64%; one layer, Table 6(a)), 195,507.9 (−0.11%; two layers,

P-M diagram
The optimal CIc-based P-M interaction diagrams for RC columns that satisfy the various design criteria (Table 4.1(b) and Table 4.2) are plotted on the basis of the forward networks, as shown in Figure 6. The P-M diagrams pass through Pu and Mu, as indicated by a solid point. In Figure 6, the P-M diagrams indicated by Legends 1, 2, and 3 were constructed with the parameters shown in the dashed black boxes in Table 6(a), (b), and (c), respectively, which were obtained by AI-based Lagrange optimization. These parameters were used to construct the P-M diagrams because the accuracies of the parameters obtained by the ANN are acceptable. The P-M diagrams were plotted using AutoCol with the appropriate input parameters. The interaction diagrams shown in Figure 6 meet the minimum CIc listed in Table 6. CIc was optimized by Lagrange-multiplier-based forward neural networks with one, two, and five layers and 80 neurons. The dotted curve cannot pass through Pu and Mu (solid point), indicating that the training accuracy of the one-layer model is not sufficient. All three optimized P-M diagrams (Legends 1, 2, and 3) shown in Figure 6 would converge, passing through one P-M diagram, when the training accuracies are sufficient.

Conclusions
This study presents hybrid optimization techniques in which the objective functions are formulated based on ANNs. Lagrange optimization techniques with constraints were implemented to achieve rational engineering decisions and to find minimized or maximized design values by solving nonlinear optimization problems under the strict constraints, conditions, and requirements imposed by design codes for the design of structural frames such as columns and beams. This study helps engineers make final design decisions based not on empirical observation but on more rational designs that meet various design requirements, including code restrictions and/or architectural criteria, while the objective parameters, such as the cost index (CIc), CO2 emissions, and structural weight (Wc), are optimized. The conclusions drawn from the study are as follows: (1) Constructing objective functions and their derivatives, which is a challenge in the Lagrange multiplier method, can now be generalized to any structural design optimization by using ANNs to generate universal AI-based objective functions, Jacobian, and Hessian matrices.
(2) Automatic designs of structural frames are proposed for realistic engineering applications to identify design solutions that optimize all design requirements simultaneously, thereby achieving design decisions not based on engineers' intuition. The results of the sensitivity analysis of LMM show that the best optimal values based on constraints can be identified for specific situations such as official design codes.
(3) Analytical objective functions were difficult to obtain. This study developed the objective functions for CI c , CO 2 emissions, and W c based on AI-based networks to be implemented for LMM to find the optimized solutions.
(4) The Karush-Kuhn-Tucker conditions were considered to account for inequality constraints, leading to automatic designs of structural frames that meet various code restrictions simultaneously.
(5) Optimization process for CI c was performed, and negligible errors were obtained, which were verified using large structural datasets. Engineering calculations also validated the design accuracies when the optimized CI c was implemented in the designs.
(6) P-M diagrams are uniquely designed to optimize columns. The proposed optimization will offer generic designs for many types of structures, including machinery and structural frames.
(7) The AI-based objective functions developed in this study can be implemented in broad areas, including engineering, general science, and economics.
The generalizable optimization method proposed herein can be applied to any optimization problem once a sufficient number of data points can be collected to establish the approximated objective functions and other parameters that formulate the ANN. The new objective functions not only enhance the computational speed compared to conventional software but also provide a generalizable calculation method for the Jacobian and Hessian matrices in Lagrange optimization. In future work, comprehensive design optimization of the dynamic design of tall buildings will be performed based on AI-based Lagrange optimization. One concern is that the computational time of Lagrange optimization depends heavily on the number of inequality constraints, because the number of cases to run (active-inequality combinations) grows with the combinations of inequality constraints. Likewise, a building design comprises many design requirements treated as inequality constraints, such as lateral displacement, story drift, the design strength of each component, and the required total base shear.

Dr. Hong holds engineering licenses from both Korea and the USA and has more than 30 years of professional experience in structural engineering. His research interests include new approaches to construction technologies based on value engineering with hybrid composite structures. He has provided many useful solutions to issues in current structural design and construction technologies as a result of his research combining structural engineering with construction technologies. He is the author of numerous papers and patents, both in Korea and the USA. Currently, Dr. Hong is developing new connections that can be used with various types of frames, including hybrid steel-concrete precast composite frames, precast frames, and steel frames. These connections would also contribute to the modular construction of heavy plant structures and buildings.
He recently published a book titled as "Hybrid Composite Precast Systems: Numerical Investigation to Construction" (Elsevier).