Estimating residential buildings' energy usage by combining the Teaching-Learning-Based Optimization (TLBO) method with conventional prediction techniques

Enhancing energy efficiency in residential and non-residential buildings is among the most significant solutions suggested for managing energy consumption and cooling load. A structure's characteristics must be considered when estimating how much heating and cooling it requires. To design and develop energy-efficient buildings, it is helpful to study the characteristics of the associated structures, such as the kinds of cooling and heating systems needed to ensure suitable interior air quality. Although the cooling load is an important part of a building's energy consumption and demand, the assessment of cooling load conditions from the envelope of large buildings is not yet comprehensively understood. In the present paper, a new conceptual system is developed to anticipate the cooling load in the residential building sector. The paper also briefly describes the major models of the developed system, to maintain continuity and to concentrate on the cooling load prediction model. To predict the cooling load, the authors modelled two methods, an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), each in conjunction with teaching-learning-based optimization (TLBO). This article aims to illustrate how artificial intelligence (AI) approaches play an essential role in addressing this need and help estimate optimal design parameters for various stations. The coefficient of multiple determination (R²) is also reported: R² is 0.96446 for TLBO-MLP and 0.97585 for TLBO-ANFIS in the training stage, and 0.95855 and 0.9721, respectively, in the testing stage on an unseen dataset, which is acceptable. The RMSE values for TLBO-MLP and TLBO-ANFIS are 0.0685 and 0.11176 in training and 0.07074 and 0.12035 in testing, respectively.
The lower RMSE value and the higher R² value indicate the favourable accuracy of the TLBO-MLP technique. Given its high R² and low RMSE, TLBO-MLP can predict residential buildings' cooling load.


Introduction
Energy is one of the most crucial issues across the globe, as industrial expansion and development continue (Tao, Aldlemy, et al., 2023). In this regard, research into energy sustainability, low-energy structures, and building efficiency has expanded in recent years, notably in the wake of the energy crisis of the 1970s (Chaiyapinunt et al., 2005; Dalamagkidis et al., 2007; Farhanieh & Sattari, 2006; Marks, 1997; Ochoa & Capeluto, 2009; Pedersen et al., 2008; Reppel & Edmonds, 1998; Sayigh & Marafia, 1998; Singh et al., 2009; Synnefa et al., 2007; Tzikopoulos et al., 2005; Yang et al., 2000). Efficient energy resources are therefore a crucial problem, given the rising energy demand brought on by developing technology and growing human requirements (Deng et al., 2023). In Turkey, a significant component of total energy consumption, about 40%, comes from buildings, particularly residential ones (Aksoy & Inalli, 2006). Achieving energy efficiency in homes is therefore a pressing need (Li et al., 2023). Otherwise, utilities such as heating, cooling, and lighting systems will account for a significant portion of the energy needed to maintain the comfort conditions of interior areas (Khedher et al., 2023). In this scenario, excessive energy use contributes to global warming, fuel consumption, air pollution, and a significant burden on the national economy and consumers.
Traditionally, two main approaches have been used to gauge a building's thermal comfort (Tao, Alawi, et al., 2023): adaptive models derived from field observations and the thermal-balance model backed by laboratory investigations (Yao et al., 2009). Artificial intelligence approaches are often used in conjunction with these strategies. The two primary categories of building cooling load (CL) forecasting technologies are data-driven and physical simulation. In the physical simulation technique, software such as TRNSYS (Al-Saadi & Zhai, 2015) and EnergyPlus (Anđelković et al., 2016) is primarily used to anticipate the cooling demand. However, the data-driven approach is better suited for CL forecasting in buildings, since using the software mentioned above for forecasting requires a certain level of competence from the operator (Nazari et al., 2023). Weather conditions, tenant activities, and the intricate relationships between a building's systems all strongly affect calculation and simulation software in older structures (Qiang et al., 2015). The data-driven strategy is based on the building's previous operational data (Ansari Manesh et al., 2023). Most recent research on cooling load forecasting builds intricate nonlinear correlations between input parameters and cooling loads using artificial neural networks (Deb et al., 2016; Shirvani et al., 2023) and support vector machines (Koschwitz et al., 2018).
Utilising a machine learning strategy, Luo et al. (2020) created a multi-objective method for multiple energy uses in new buildings. Three models, support vector machines, long short-term memory neural networks, and artificial neural networks, were used to forecast the building's energy usage. The forecast outcomes showed that the ANN-based methods for building energy consumption had the lowest mean absolute percentage errors (Adnan et al., 2023). Artificial neural networks, support vector machines, and linear regression were used by Li and Yao (2020) to create five machine learning samples for load forecasting. The findings demonstrated that the models' predicted cooling load had a standardised mean absolute error and a standardised mean squared error of less than 4%. Four backpropagation neural networks were constructed by Kim et al. (2020) to examine how input variables such as building occupancy and environmental conditions impact building energy usage. When the implementations of the four samples were compared, the ANN sample trained with the Levenberg-Marquardt method was found to be the most exact. As previously indicated, researchers have built cooling load prediction samples based on support vector machines and artificial neural networks, retaining after correlation analysis the input parameters strongly linked to the prediction models. Artificial neural networks, however, struggle with local minima and slow convergence in real-world applications (Shi, 2023). The downside of support vector machines when developing a CL forecast sample is that they analyse data slowly. Researchers have optimised model structures to address these issues and increase prediction accuracy (Zhu et al., 2023). Huang and Li (2021) employed the ant colony technique to enhance a neural network and create a load prediction sample; the modified model's mean absolute percentage error decreased by 73.28%. The artificial neural network was improved using the elephant herding optimisation approach by Moayedi et al. (2020). The outcomes demonstrated that the EHO-MLP method can replace the conventional model for forecasting building cooling demand. The neural network was optimised by Zhou et al. (2020) using the particle swarm and artificial bee colony techniques, respectively. The cooling load prediction samples' accuracy was estimated using the coefficient of determination (R²), the mean absolute error (MAE), and the root mean square error (RMSE). The results demonstrated that the particle swarm and artificial bee colony algorithms can increase the accuracy of the cooling load forecast model.
Additionally, the PSO method outperformed the ABC algorithm regarding the prediction model's performance. A load prediction sample's accuracy may also be somewhat increased by calibration techniques used to adjust the sample. Qiang et al. (2015) used an enhanced multivariate linear regression sample to forecast the typical daily cooling demand of an office building. In order to calibrate the initial load forecast results using a reference day, Sun et al. (2013) identified the most relevant meteorological data and used its hourly projections to construct a simple online cooling load prediction sample. Finally, the calibrated load prediction model's accuracy was increased using the errors from the previous two forecasts.
ANFIS is often utilised in several technical fields in the literature. Using a model with a mean absolute error below 2.2%, Mellit et al. (2009) predicted daily solar radiation and the mean monthly clearness index in remote places. Subasi et al. (2009) introduced a novel ANFIS-based method for anticipating the crucial element that leads to concrete cracking in the early stages of cement hydration, and they were pleased with the results. Alasha'ary et al. (2009) illustrated the accuracy of ANFIS in forecasting by using a neuro-fuzzy technique to forecast the temperature of four distinct rooms built with the various construction components used in Australian residential structures. Ying and Pan (2008) used ANFIS to anticipate regional electricity demand and compared the outcomes with those from other approaches; they discovered that ANFIS produces more accurate findings. With a mean absolute error of 0.03%, Singh et al. (2007) determined that ANFIS was the best prediction approach among several neural networks with varied training functions. An ANFIS model was created by Das and Kishor (2009) to forecast the heat transfer coefficient during pool boiling of distilled water. Ayata et al. (2007) used simulated data from a package programme to forecast indoor maximum and average air velocities using an ANFIS model. An inferential sensor sample employing ANFIS modelling was created by Jassar et al. (2009) to estimate the average air temperature in space-heating technologies. The preceding cases make clear that a variety of artificial intelligence (AI) models have been used to make predictions regarding the energy performance of buildings (EPB). Although hybrid and fuzzy logic-based models are still in development, how they might be applied to simulate the heating loads (HLs) and cooling loads (CLs) of housing constructions remains unknown. Furthermore, it is still difficult to find comprehensive research comparing current soft computing techniques, and the statistical analysis of the data generated by the models has not yet undergone a thorough evaluation.
Economically and environmentally, creating a realistic method for thermal load modelling is useful. In light of the above, this research aims to provide architects and design engineers with information regarding the cooling loads of energy-efficiently built buildings, to predict the cooling and heating load using metaheuristic algorithms, and to determine the accuracy of these algorithms. Metaheuristic-optimised predictors, namely an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), each in conjunction with teaching-learning-based optimization (TLBO), are assessed to discover whether they can aid in determining the CL. The best approach is identified after the methods are compared. Building cooling energy was calculated for the training and test datasets using a transient-state finite-difference solution of a one-dimensional heat conduction problem. This study provides a customised technique based on the teaching-learning-based optimization (TLBO) learning paradigm to forecast the cooling load. For this study, the characteristics of 768 buildings were gathered, and the data were then used to train the TLBO-ANFIS and TLBO-ANN models. Three performance criteria are used to evaluate the results of these methods, and they demonstrate how well this approach predicts the cooling demand of residential structures.
The remainder of the article is structured as follows: the dataset and case study are discussed in Section 2; the strategies and procedures used are described in Section 3; the simulation and numerical outcomes are presented in the following section; and the final section concludes the work. Tsanas and Xifara (2012) produced the dataset used in this study. The factors that distinguish one structure from another include the glazing area, its distribution, and the orientation. Eighteen prototype cubes with equivalent materials were used to replicate each structure. To ensure that the materials used for each of the 18 components were equivalent across the various types of construction, the most recent and most popular components in the building industry were chosen. Four glazing-area levels were employed in the design procedure (Figure 1): 10%, 25%, and 40% of the floor area, plus a no-glazing case.

Established database
Furthermore, the structures were assumed to be located in Athens, Greece. The data consist of 768 specimens, each with eight input characteristics (x1, x2, …, x8) serving as decision variables and one output (y1), as given in Table 1 (Le et al., 2019; Tsanas & Xifara, 2012). This study uses these properties as decision variables to predict y1, the cooling demand. Although the dataset was created through simulation, it is noteworthy that the suggested approaches also work with real-world datasets.
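As an illustrative sketch (not the authors' code), the structure of the dataset and the min-max normalisation implied by the normalised data ranges can be expressed as follows. The placeholder value ranges below are assumptions used only to generate synthetic stand-in data, not the real records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder stand-in for the 768-sample dataset: eight decision
# variables (x1..x8) and one target (y1, cooling load). The low/high
# bounds here are illustrative assumptions, not the real records.
X = rng.uniform(low=[0.62, 514.0, 245.0, 110.0, 3.5, 2.0, 0.0, 0.0],
                high=[0.98, 808.0, 416.0, 220.0, 7.0, 5.0, 0.4, 5.0],
                size=(768, 8))
y = rng.uniform(10.0, 45.0, size=768)  # cooling load placeholder

def min_max_normalise(a):
    """Scale each column to the [0, 1] range."""
    lo, hi = a.min(axis=0), a.max(axis=0)
    return (a - lo) / (hi - lo)

X_norm = min_max_normalise(X)  # global min 0.0, max 1.0 after per-column scaling
```

A model trained on `X_norm` sees all eight decision variables on a comparable scale, which is the usual motivation for the normalisation shown in Figure 2.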
Figure 2 shows a bar chart of the current database and the normalised data range of the input variables. Bar charts are a common visualisation for categorical data, consisting of bars whose lengths are proportional to the values they represent. Figure 5 illustrates the Andrews plot of the input layers and output. The Andrews plot represents each multi-dimensional data point as a Fourier series over its coordinates; points that are close in some metric have similar Fourier curves and therefore tend to cluster together in the plot. The Andrews plot is thus an informative graphical tool for clustering and other data-analytic tasks. Its weakness is that the coordinates mapped to the lowest frequencies dominate the plot visually and may give misleading impressions. Visualising multivariate data is a hard but interesting problem: scatterplots let us see data in two or three dimensions, but visualising more than three dimensions is far more difficult. Wegman and Shen (1993) discuss various multivariate visualisation tools; two of the most interesting are the Andrews plot (Andrews, 1972) and the grand tour (Asimov, 1985).

Methodology
Modelling and forecasting tasks were completed using ANN and machine learning, which are effective data-mining tools (Haykin, 2009; Moradzadeh & Khaffafi, 2017). To anticipate the load/energy, a mapping between the building characteristics and the building's CL was created in this study using MLP, ANFIS, and TLBO as three applications of these algorithms. Each of the suggested techniques is briefly discussed in the sections that follow.

K-Fold cross-validation
Cross-validation is an approach used to assess the performance of a classifier when categorising new task examples. One repetition of cross-validation involves dividing a data sample into two separate subsets: the classifier is trained on one subset (the training set), and its performance is then tested on the other subset (the testing set).
In k-fold cross-validation, the original sample is partitioned randomly into k subsets. Of the k subsets, a single subset is retained as the validation data to test the classifier, and the remaining k−1 subsets are used as the training data. The cross-validation procedure is then repeated k times, with each of the k subsets used exactly once as the testing dataset. The k results from the folds are averaged to produce a single performance evaluation.
Cross-validation is the topic of various studies; three interesting and related outcomes are presented below:
• Repeating the cross-validation iterations asymptotically converges to an accurate evaluation of classifier performance (Stone, 1977);
• Ten-fold cross-validation is better than leave-one-out validation for method selection, and it is also better than other choices of k (Kohavi, 1995);
• K-fold cross-validation tends to underestimate the classifier's performance (Kohavi, 1995).
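The k-fold procedure described above can be sketched in a few lines. This is a generic illustration: the `fit` and `score` callables below are placeholders, not the classifiers used in this paper.

```python
import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    """Randomly partition sample indices into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), k)

def cross_validate(fit, score, X, y, k=5):
    """Train on k-1 folds, test on the held-out fold, average the k scores."""
    folds = k_fold_indices(len(X), k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        scores.append(score(model, X[test], y[test]))
    return float(np.mean(scores))

# Toy example: a "model" that always predicts the training mean, scored by RMSE.
X_demo = np.arange(100, dtype=float).reshape(-1, 1)
y_demo = X_demo.ravel() * 0.5
fit = lambda X, y: y.mean()
score = lambda m, X, y: float(np.sqrt(np.mean((y - m) ** 2)))
avg_rmse = cross_validate(fit, score, X_demo, y_demo, k=5)
```

Each sample appears in exactly one test fold, so the averaged score estimates performance on unseen data, which is the property exploited in the 4-fold training / 1-fold testing split used later in this paper.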

Artificial intelligence methods
Multilayer Perceptrons (MLPs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS) are powerful computational techniques that have demonstrated state-of-the-art capabilities in solving complex optimisation problems.
Here are some of their key capabilities:
• Nonlinear modelling: MLPs and ANFIS can capture nonlinear relationships between input variables and output responses. This allows them to effectively model and optimise the complex systems exhibiting nonlinear behaviour that often arise in real-world problems.
• Universal approximation: MLPs can approximate any continuous function to arbitrary accuracy, given a sufficient number of neurons and appropriate training. This property makes MLPs versatile and capable of modelling a wide range of complex optimisation problems.
• Adaptive learning: MLPs and ANFIS can automatically adjust their internal parameters, such as weights and biases, through the learning process. This adaptive learning capability allows them to continuously refine their models to improve performance and optimise the objective function.
It is important to note that the success of MLPs and ANFIS in solving complex optimisation problems depends on the specific problem domain and the availability of appropriate training data. Proper model design, training, and validation procedures are critical to ensuring their effectiveness and accuracy.

Multilayer perceptron (MLP)
An artificial neural network (ANN) is a modelling solution for complicated systems in estimation problems in fields such as engineering, medicine, and finance (Moayedi & Jahed Armaghani, 2018; Nguyen et al., 2020; Shariati et al., 2021; Yan et al., 2019; Zandi et al., 2018; Zhao et al., 2020; Zhao et al., 2021). The ANN is a data-processing system that resembles the structure and function of the human brain (Wang et al., 2022). It is a densely connected multilayer structure comprising various neurons (Cui et al., 2022). Such networks can identify similarities, generalising to new input parameters after being trained to predict the proposed output pattern accurately (Luo et al., 2022).
Each neuron in a layer of the MLP is linked to every neuron in the layers above and below it (Dai et al., 2023). Figure 6 depicts the architecture of the MLP and highlights the nonlinear mapping between the input and output vectors (Moradzadeh & Pourhossein, 2019). Weights link the neurons, and a nonlinear transfer function produces the output signals (Seo & Eo, 2019).
In Equation (1), X and Y are the input and output signals, respectively:

Y = F(w · X + b)   (1)

where F is the nonlinear transfer function and b and w are the bias and weight vectors. Given that the MLP is trained to learn, a database with known input and output vectors is needed to train the weight vector, which is then adjusted according to the output signals (Thimm & Fiesler, 1997).
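A minimal sketch of the forward pass in Equation (1) is shown below. The layer sizes (8 inputs, 10 hidden neurons, 1 output) and the sigmoid transfer function are illustrative assumptions, not the exact network the authors trained.

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """One MLP forward pass: at each layer, Y = F(w . X + b).
    F is the logistic sigmoid on hidden layers; the output layer is linear."""
    a = x
    for i, (w, b) in enumerate(zip(weights, biases)):
        z = w @ a + b
        a = z if i == len(weights) - 1 else 1.0 / (1.0 + np.exp(-z))
    return a

# Hypothetical shapes: 8 inputs (x1..x8) -> 10 hidden -> 1 output (CL)
rng = np.random.default_rng(1)
W = [rng.standard_normal((10, 8)), rng.standard_normal((1, 10))]
b = [np.zeros(10), np.zeros(1)]
y_hat = mlp_forward(rng.standard_normal(8), W, b)
```

Training then amounts to adjusting `W` and `b` so that `y_hat` matches the known outputs, which is the role of the optimiser (here, TLBO).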

The adaptive neuro-fuzzy inference system (ANFIS)
Jang developed the ANFIS, which combines the most beneficial aspects of fuzzy systems and neural networks (Jang, 1992). The structure of ANFIS is made up of if-then statements, fuzzy input-output data pairs, and neural network learning algorithms. ANFIS is an approach for simulating complicated nonlinear mappings using neural network learning and fuzzy inference techniques (Inan et al., 2007). The ANFIS system can function in uncertain, noisy, and unreliable environments because it merges fuzzy logic and ANN methods (Liu & Ling, 2003). The ANFIS method improves the membership functions and the related variables that fit the target databases using the training process of neural networks (Wu et al., 2009). Because it can use expert judgment, it generates more exact findings than the mean-square-error criterion alone. The learning algorithm that ANFIS uses is a hybrid that merges the least squares method with the back-propagation algorithm. To simplify the description, a system with two inputs and one output is considered. The ANFIS structure is created using five layers, whose roles are summarised below. Layer 1: The nodes of this layer output the membership values resulting from the input patterns and the employed membership functions (Rashidi et al., 2022).
The outputs derived from these nodes are given below. To ease the notation, x and y represent the input nodes, A and B are the linguistic labels, and μ_Ai and μ_Bi are the membership functions.
Most often, the membership functions μ_Ai and μ_Bi are assumed to have bell-shaped distributions with maximum and minimum values of 1 and 0, respectively, e.g. in Gaussian form:

μ_Ai(x) = exp(−(x − μ_i)² / (2σ_i²))   (2)

where μ_i is the centre of the bell-shaped membership function and σ_i is its standard deviation; in the generalised bell form, μ_Ai(x) = 1 / (1 + |(x − c_i)/a_i|^(2b_i)), the premise parameters are a_i, b_i, and c_i (Çaydaş et al., 2009). Layer 2: Each rule's firing strength is determined in this layer by multiplication, w_i = μ_Ai(x) · μ_Bi(y).
Layer 3: The firing strengths are normalised in this layer. Each node computes the ratio of the ith rule's firing strength to the sum of all rules' firing strengths, w̄_i = w_i / (w_1 + w_2).
Layer 4: Each node in this layer outputs the normalised firing strength multiplied by a first-order polynomial. The outputs are expressed as in Eq. (6), where f_1 and f_2 come from the if-then rules stated below:

O_4,i = w̄_i f_i = w̄_i (p_i x + q_i y + r_i)   (6)

Rule 1: If x is A_1 and y is B_1, then f_1 = p_1 x + q_1 y + r_1 (and similarly for Rule 2), where p_i, q_i, and r_i are the linear consequent parameters (Übeyli, 2008). Layer 5: This node sums all the signals from the fourth layer to determine the ANFIS's total output:

output = Σ_i w̄_i f_i = (Σ_i w_i f_i) / (Σ_i w_i)
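The five layers can be traced in code for the two-input, two-rule first-order Sugeno case described above. The membership and consequent parameters below are arbitrary illustrations, not fitted values.

```python
import numpy as np

def bell_mf(x, a, b, c):
    """Generalised bell membership function with premise parameters a, b, c."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, premise, consequent):
    """Five-layer first-order Sugeno ANFIS with two rules.
    premise: (a, b, c) tuples for A1, B1, A2, B2;
    consequent: (p, q, r) tuple per rule."""
    A1, B1, A2, B2 = premise
    # Layer 1: membership degrees of the two inputs
    mu = [bell_mf(x, *A1), bell_mf(y, *B1), bell_mf(x, *A2), bell_mf(y, *B2)]
    # Layer 2: firing strengths (product of memberships)
    w = np.array([mu[0] * mu[1], mu[2] * mu[3]])
    # Layer 3: normalised firing strengths
    w_bar = w / w.sum()
    # Layer 4: weighted rule outputs f_i = p_i*x + q_i*y + r_i
    f = np.array([p * x + q * y + r for (p, q, r) in consequent])
    # Layer 5: overall output = sum of weighted rule outputs
    return float(np.sum(w_bar * f))

# Illustrative parameters (assumptions, not learned values)
premise = [(2.0, 2.0, 0.0), (2.0, 2.0, 0.0), (2.0, 2.0, 5.0), (2.0, 2.0, 5.0)]
consequent = [(1.0, 1.0, 0.0), (0.5, -0.5, 2.0)]
out = anfis_forward(1.0, 2.0, premise, consequent)
```

In training, back-propagation tunes the premise parameters while least squares fits the consequent parameters, which is the hybrid learning rule mentioned above.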

ANN and ANFIS parameter selection
The selection of parameters in Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS) typically involves a process known as training or tuning. Overall, parameter selection often involves trial and error, experimentation, and a good understanding of the problem domain; automated approaches such as hyperparameter optimisation can also help find optimal parameter values.
The choice of parameters can significantly impact the model's performance, so investing time and effort in this phase of model development is crucial.

Teaching-learning-base optimization (TLBO)
Every swarm-intelligence-based and evolutionary optimisation algorithm requires common control parameters, such as the number of generations, the population size, the elite size, etc. In addition to these common control parameters, various algorithms need their own specific settings. For instance, the GA uses the mutation probability, crossover probability, and selection operator; the PSO method uses the inertia weight and the cognitive and social parameters; the ABC algorithm uses the number of bees and the limit; and NSGA-II needs the mutation probability, crossover probability, and the distribution index.
The proper tuning of these algorithm-specific variables greatly influences the algorithms' performance. Adjusting algorithm-specific parameters incorrectly results in either increased computing effort or convergence to a local optimum. The workload is also increased, since the common control parameters must be tuned along with the algorithm-specific ones. The TLBO method was created in response to the need for an algorithm that does not require algorithm-specific variables.
The TLBO method was developed by Rao et al. (2011), Rao, Savsani, and Balic (2012), Rao, Savsani, and Vakharia (2012), and Rao and Savsani (2012) and is based on a teacher's influence on students' performance in a class. The algorithm outlines two fundamental modes of learning: (1) learning from a teacher (the teacher step) and (2) learning through interacting with other students (the learner step). The algorithm considers a population of students, and the various subjects offered to them correspond to the optimisation problem's design parameters. A student's results are analogous to the optimisation problem's 'fitness' value, and the teacher is the best overall solution in the population. The design variables are the parameters comprising the objective function of the optimisation problem, and the objective function's value determines the optimal solution.
The 'Teacher step' and the 'Learner step' are the two steps in which TLBO operates.The operation of both stages is described below.

Teacher step
In the first step of the algorithm, the instructor teaches the students. During this phase, the teacher attempts to improve the mean score of the class in the subject being taught, according to the students' abilities. Consider that for every iteration i there are 'm' subjects (i.e. design parameters) and 'n' learners (i.e. the population size, k = 1, 2, …, n), and that M_j,i is the mean result of the learners in subject 'j' (j = 1, 2, …, m). The result of the best learner, kbest, may be interpreted as the best overall result X_total−kbest,i, taking all the subjects together. The TLBO algorithm takes the best learner as the instructor, since the teacher is usually a highly knowledgeable person who helps students achieve better results. The difference between the current mean result for each subject and the teacher's corresponding result is

Difference_Mean_j,k,i = r_i (X_j,kbest,i − T_F · M_j,i)

where X_j,kbest,i is the result of the best learner in subject j, T_F is the teaching factor that specifies how much the mean value will change, and r_i is a random number in [0, 1]. The value of T_F may be either 1 or 2, chosen at random with equal probability:

T_F = round[1 + rand(0, 1)]   (10)

The TLBO method does not use T_F as a tuned parameter; the procedure uses Eq. (10) to determine its value randomly, so it is not provided as an input to the algorithm. Several trials on numerous benchmark functions have shown that the approach works best when T_F is between 1 and 2, and that performance is much improved when T_F is exactly 1 or 2. Accordingly, to simplify the procedure, T_F is advised to take the value 1 or 2 according to the rounding rule of Eq. (10). Based on Difference_Mean_j,k,i, the present solution is updated in the teacher step according to

X′_j,k,i = X_j,k,i + Difference_Mean_j,k,i   (11)

where X′_j,k,i is the updated value of X_j,k,i. If X′_j,k,i gives a better function value, it is accepted. All function values accepted at the end of the teacher step are kept and serve as the input for the learner step; the teacher phase thus feeds the learner phase.

Learner step
In the algorithm's second stage, students engage with one another to expand their collective knowledge.To advance knowledge, a learner engages in random interactions with other learners.If the other student is more knowledgeable than the learner, the learner gains new information.The learning phenomena of this stage are described below using the 'n' population size as a reference.
Randomly choose two learners P and Q such that X′_total−P,i ≠ X′_total−Q,i (where X′_total−P,i and X′_total−Q,i are the updated function values of P and Q after the teacher phase):

X′′_j,P,i = X′_j,P,i + r_i (X′_j,P,i − X′_j,Q,i), if X′_total−P,i < X′_total−Q,i   (12)
X′′_j,P,i = X′_j,P,i + r_i (X′_j,Q,i − X′_j,P,i), if X′_total−Q,i < X′_total−P,i   (13)

If X′′_j,P,i gives a better function value, it is accepted.
Equations (12) and (13) apply to minimisation problems; Equations (14) and (15) are used for maximisation problems:

X′′_j,P,i = X′_j,P,i + r_i (X′_j,P,i − X′_j,Q,i), if X′_total−P,i > X′_total−Q,i   (14)
X′′_j,P,i = X′_j,P,i + r_i (X′_j,Q,i − X′_j,P,i), if X′_total−Q,i > X′_total−P,i   (15)

Teaching-learning-based optimization (TLBO) is a population-based algorithm that replicates the teaching-learning process in a classroom. The method needs no algorithm-specific control variables; it requires only common control variables such as the population size and the number of generations.
In summary, Teaching-Learning-Based Optimization is a population-based optimisation technique that simulates the teaching and learning processes in a classroom to improve solutions to optimisation problems iteratively.It's a relatively simple but effective approach and can be used in various domains where optimisation is required.TLBO has been applied to various optimisation problems, including mathematical optimisation, engineering design, and machine learning model tuning.It's known for its simplicity and ability to converge to good solutions, especially for continuous optimisation problems.However, it may not always outperform more advanced optimisation algorithms on complex problems.
As stated, TLBO is a population-based optimisation algorithm inspired by the teaching and learning processes observed in a classroom. It comprises the following steps:
• Initialization: Start with an initial population of potential solutions (individuals or learners). These solutions are treated as students in a classroom.
• Teaching Phase: Each student's fitness for the problem being solved is evaluated, and the best-performing student (the teacher) is identified. The teacher guides and influences the other students to improve their understanding (solutions) by sharing their knowledge.
• Learning Phase: Students other than the teacher update their solutions based on a combination of their own understanding (previous solution) and the guidance provided by the teacher, much as students learn from the best performers in a classroom. The learning process incorporates randomness, allowing exploration of different solutions.
• Update Population: After the teaching and learning phases, the population is updated with the new solutions, and the best-performing solution found so far is retained.
• Termination Criteria: The teaching and learning phases repeat for a number of iterations or until a termination condition is met (e.g. a satisfactory solution is found).
• Output: The final solution obtained when the algorithm terminates is taken as the optimised solution to the problem.
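The steps above can be sketched as a minimal TLBO for minimisation. This is a generic illustration under common TLBO conventions (greedy acceptance, bound clipping), not the exact implementation used in this study.

```python
import numpy as np

def tlbo_minimise(f, bounds, pop_size=20, iters=100, seed=0):
    """Minimal TLBO sketch for minimisation. `bounds` is a (low, high)
    pair of arrays of length m (the number of design variables/'subjects')."""
    rng = np.random.default_rng(seed)
    low, high = map(np.asarray, bounds)
    X = rng.uniform(low, high, size=(pop_size, len(low)))  # learners
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        # --- Teacher phase: pull learners toward the best solution ---
        teacher = X[np.argmin(fit)].copy()
        mean = X.mean(axis=0)
        for k in range(pop_size):
            TF = rng.integers(1, 3)          # teaching factor, randomly 1 or 2
            r = rng.random(len(low))
            cand = np.clip(X[k] + r * (teacher - TF * mean), low, high)
            fc = f(cand)
            if fc < fit[k]:                  # greedy acceptance
                X[k], fit[k] = cand, fc
        # --- Learner phase: learn from a randomly chosen peer ---
        for k in range(pop_size):
            q = rng.integers(pop_size)
            if q == k:
                continue
            r = rng.random(len(low))
            step = (X[k] - X[q]) if fit[k] < fit[q] else (X[q] - X[k])
            cand = np.clip(X[k] + r * step, low, high)
            fc = f(cand)
            if fc < fit[k]:
                X[k], fit[k] = cand, fc
    best = np.argmin(fit)
    return X[best], fit[best]

# Demo on the sphere function (illustrative objective, not the CL model)
best_x, best_f = tlbo_minimise(lambda v: float(np.sum(v ** 2)),
                               (np.full(3, -5.0), np.full(3, 5.0)),
                               pop_size=20, iters=100)
```

Note that, apart from the population size and iteration count, no algorithm-specific parameter appears, which is the property the text emphasises. In this paper the objective `f` would be the training error of the MLP or ANFIS as a function of its weights or premise/consequent parameters.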

Results and discussion
The cooling and heating demand may be predicted using the MLP, ANFIS, and TLBO networks. A dataset was used as the training input for each network. In the first step, each network needed a preliminary design to establish the number of neurons in the hidden layer and the network coefficients. The quantities of training and test data for each network are established after its design. To verify the training phase of each network, 80% of the samples in this study were used as training data (4 folds) and 20% as test data (1 fold).

Accuracy indicators
The outcomes of any ANN algorithm need to be assessed after training and testing. To achieve this, statistical performance indicators such as the coefficient of determination (R²) and the root mean square error (RMSE) may be used. They are computed with the following formulae (Choubin et al., 2016):

RMSE = sqrt( (1/U) Σ_{i=1..U} (S_i,observed − S_i,anticipated)² )

R² = 1 − Σ_{i=1..U} (S_i,observed − S_i,anticipated)² / Σ_{i=1..U} (S_i,observed − S̄_observed)²

where S_i,observed and S_i,anticipated represent, respectively, the actual and anticipated CL values of the green residential building, U denotes the total number of occurrences, and S̄_observed is the mean of the actual CL values. Using the enhanced dataset, the machine-learning models were built in the Weka software environment. The results of this procedure are given in the following section.
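The two indicators can be computed directly; a minimal sketch:

```python
import numpy as np

def rmse(observed, anticipated):
    """Root mean square error between actual and predicted CL values."""
    observed, anticipated = np.asarray(observed), np.asarray(anticipated)
    return float(np.sqrt(np.mean((observed - anticipated) ** 2)))

def r_squared(observed, anticipated):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    observed, anticipated = np.asarray(observed), np.asarray(anticipated)
    ss_res = np.sum((observed - anticipated) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

A perfect predictor gives RMSE = 0 and R² = 1, while a predictor that always outputs the mean of the observations gives R² = 0, which is why lower RMSE and higher R² indicate the better model in Tables 2 and 3.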

Incorporated FIS and MLP with TLBO optimizer
The calculated ANFIS and MLP mathematical equations were presented to the TLBO as the primary optimisation problem. This part assesses how the size of the validation and training datasets was chosen for the cross-validation procedure. New validation and training sets are picked randomly from the 4 initial training folds before going through the validation procedure, while the fifth fold (the testing dataset) is left unmodified and used to evaluate the forecasting performance of the various methods. The population sizes considered for the training and validation sets are 50, 100, 150, 200, 250, 300, 350, 400, 450, and 500. To give each network a fair chance of reducing its error, each network was run for 1000 iterations. The result of this operation is the set of ten convergence curves shown in Figure 7. The choice of predictor variables and the model-building procedure remain the same, but the latest validation and training sets are used separately to replace the initial ones. Figure 7 displays the prediction effectiveness of the models based on the MSE value, using training and validation sets of various sample sizes. The graph shows that the TLBO-MLP method yields the most accurate results, since it has the lowest MSE value.
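The population-size sweep described above can be sketched as a simple grid search; here `train_and_score` is a hypothetical callback standing in for one full train-and-validate run of either hybrid model, returning the validation MSE:

```python
# Candidate population sizes from the text
POP_SIZES = [50, 100, 150, 200, 250, 300, 350, 400, 450, 500]

def select_population_size(train_and_score, pop_sizes=POP_SIZES, iterations=1000):
    """Run one training/validation cycle per population size and keep the
    configuration with the lowest validation MSE.

    train_and_score(pop_size, iterations) -> validation MSE (float)
    """
    scores = {p: train_and_score(p, iterations) for p in pop_sizes}
    best = min(scores, key=scores.get)
    return best, scores
```

With this structure, plotting `scores` against `pop_sizes` for each model reproduces the kind of comparison shown in Figure 7.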
Tables 2 and 3 show the performance metrics of the TLBO-ANFIS and TLBO-MLP models with ten population sizes for forecasting building cooling loads using the training and test data. These models produced consistent results, with R^2 values between 0.96 and 0.97 and RMSE values between 0.06 and 0.12. With R^2 of 0.97585 and 0.9721 and RMSE of 0.11176 and 0.12035 in the training and testing steps, 250 is the ideal population size for the TLBO-ANFIS. With a population size of 350, the TLBO-MLP method has the greatest R^2 (0.96446 and 0.95855) and the lowest RMSE (0.0685 and 0.07074) in the training and testing steps. The findings demonstrate that the TLBO-MLP method, with the lower RMSE value, performs better and is more accurate at estimating the cooling demand.
Figures 8-11 display the excellent correlation between the true and anticipated values in the training and testing phases of the TLBO-ANFIS network. For the TLBO-MLP network in the training step, Figure 8 indicates a high coefficient of determination between the true and predicted values.
It is evident that each of these networks has completed the training step, given the high correlation between the target data and each network's output (as indicated in Figures 8-11). Training MAEs of 0.078967 and 0.050435 and testing MAEs of 0.08107 and 0.051696 demonstrate the TLBO-MLP method's higher accuracy owing to its lower error. Table 4 provides a performance assessment of the suggested approaches in terms of R^2 and RMSE. The findings show that the cooling-load prediction obtained with the TLBO-MLP approach, which had the greatest R^2 value (0.96446) and the fewest errors in terms of RMSE (0.0685), was the best forecast. The TLBO-ANFIS technique was associated with a higher RMSE of the forecast values in the cooling-load prediction.
Figure 14 shows the Taylor diagram for the current database. Taylor diagrams (Taylor, 2001) provide a graphical summary of how closely a pattern (or set of patterns) matches observations. The resemblance between two patterns is quantified in terms of their centred root-mean-square difference, their correlation, and the amplitude of their variations (represented by standard deviations). Taylor diagrams help appraise multiple features of complex approaches or gauge the relative skill of various methods (Smithson, 2002).
The statistical variables are presented in a Taylor diagram, a fundamental graphical tool for a comparative evaluation of the TLBO-ANFIS and TLBO-MLP methods against the actual database. The Taylor diagram illustrates a substantial statistical analysis, including the standard deviation between the predicted and original values obtained with the TLBO-ANFIS and TLBO-MLP methods. Based on Figure 14, the TLBO-ANFIS and TLBO-MLP methods achieve coefficient of determination (R^2) values of 0.97585 and 0.96446 for training and 0.9721 and 0.95855 for testing, respectively. This is also supported by statistical parameters such as the overall RMSE, which is 0.11176 and 0.12035 for the TLBO-ANFIS model and 0.0685 and 0.07074 for the TLBO-MLP model in the training and testing phases, respectively.

Discussion
Eight criteria determine a residential green building's cooling load: relative compactness, overall height, surface area, roof area, wall area, orientation, glazing area, and glazing area distribution. This research used two distinct methods, TLBO-ANFIS and TLBO-MLP, to forecast the cooling-load intensities of residential structures. More evolutionary techniques will be used for predicting building energy in the future; machine learning is a rapidly growing field of study. The yearly residential cooling-load intensity database created in the current study will be a relevant domestic source of energy data for subsequent research on other strategies. The building-space CL intensity data used in this investigation were produced using a MATLAB simulation. The suggested method is nevertheless equally relevant to the cooling demand of building operations. The gathering of precise building energy usage data at a fine scale will become possible with the growing usage of smart metres and the Internet of Things (IoT), even if it is currently difficult to obtain enough building operating energy consumption information.
Based on the size and features of the structure, the established technique can provide an accurate estimate of the required cooling load for a prospective building project. The simulations could be helpful to engineers and building owners when designing HVAC systems. Modifying structural design and architecture depending on the input parameters is another early-stage support method for reconstruction projects. As a result, it is also possible to analyse each input parameter's impact separately to understand the thermal load's behaviour. The TLBO-MLP accurately predicts the trend even though the trend itself is neither regular nor consistent. As a result, this method could produce accurate models of real-world structures.
The TLBO-ANFIS and TLBO-MLP are two optimisation algorithms combined with machine-learning techniques that are commonly used for predicting cooling loads in smart buildings. While these approaches have their advantages, they also have certain limitations. Some limitations of TLBO-ANFIS and TLBO-MLP in predicting cooling loads are: (1) Data availability and quality: the performance of both TLBO-ANFIS and TLBO-MLP relies heavily on the availability and quality of training data; insufficient or inaccurate data can lead to suboptimal predictions and reduced accuracy. [...] nodes, and learning rates; determining the optimal configuration can be challenging, requiring expertise and iterative experimentation. (8) Lack of adaptability: TLBO-ANFIS and TLBO-MLP models may have limited adaptability to real-time changes in building conditions or dynamic operating scenarios; they are typically trained on historical data and may be unable to adapt quickly to sudden changes or unforeseen events.
It's important to note that while TLBO-ANFIS and TLBO-MLP have these limitations, they can still be valuable tools for cooling load prediction in smart buildings.However, it's crucial to be aware of their constraints and consider alternative approaches or techniques when necessary.

Conclusions
Forecasting buildings' heating and cooling loads has become more difficult owing to the significance of energy conservation and its management. To boost the accuracy of their CL predictions, most researchers in this subject have provided a variety of methodologies and models. This article suggested the TLBO-ANFIS and TLBO-MLP approaches to forecast the cooling load of a residential structure using machine-learning standards. The major goal of these techniques was to improve prediction accuracy by establishing a mapping between the input and output parameters. During the creation of each of the suggested models, the technical specifications of a residential building were employed as inputs, and the cooling load was employed as the output variable of each network within the training phase. The trained networks were then tested, and cooling-load predictions were made using new, unseen data. Finally, each trained network could accurately provide the cooling-load projections. The TLBO-MLP, which predicted the cooling load with the greatest R^2 (0.96446 and 0.95855) and the lowest RMSE (0.0685 and 0.07074), shows the best performance in predicting the cooling load. The TLBO-ANFIS approach, with R^2 of 0.97585 and 0.9721 and RMSE of 0.11176 and 0.12035, also shows a good level of accuracy. Despite some of their shortcomings, TLBO-ANFIS and TLBO-MLP can be useful tools for predicting the cooling load in smart buildings. In light of the limitations of the research, potential ideas for future projects were also presented, including data improvement and future project selection, optimising building characteristics using the model, and contrasting the model with improved time-saving techniques. The TLBO-MLP methodology was presented for use in real-world scenarios.

Figure 1 .
Figure 1.The preparation of data with a graphical view.

Figure 2 .
Figure 2. The normalised variables range with a graphical view.(a) Relative compactness, Surface area, Wall area, and Roof area; (b) Overall height, Orientation, Glazing area, and Glazing area distribution.
Histograms are commonly used in data analysis and statistics to visualise the frequency distribution of a dataset. A histogram consists of bars, each representing a range of values of the measured variable; the height of a bar represents the frequency or count of observations that fall within that range. The bars are typically drawn adjacent to each other, with no gaps between them, to emphasise the continuity of the data. Histograms can be created using various software tools or programming languages, such as Excel, Python, R, or MATLAB. The target values (cooling load) are classified into three classes: class I ranges between 12.38 and 23.3, class II between 23.3 and 35.66, and class III between 35.67 and 48.04. Figure 3(a) shows the variation of surface area, relative compactness, wall area, and roof area.
Figure 3(b) shows the variation of orientation, glazing area distribution, overall height, and glazing area. Figure 4 also shows the variation of the input variables two by two.
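The three-class binning of the cooling-load target described above can be expressed as a short helper; the function name is illustrative:

```python
import numpy as np

def classify_cooling_load(cl):
    """Map cooling-load values to class 1, 2 or 3 using the edges from the text:
    class I [12.38, 23.3), class II [23.3, 35.66), class III [35.66, 48.04]."""
    cl = np.asarray(cl, dtype=float)
    inner_edges = [23.3, 35.66]            # boundaries between the three classes
    return np.digitize(cl, inner_edges) + 1
```

Passing the full cooling-load column through this helper gives the class labels whose counts the histograms in Figure 3 visualise.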

Figure 3 .
Figure 3.The variation of input variables.

Figure 4 .
Figure 4.The variation of input variables.

Figure 5 .
Figure 5. Andrews plot of the input layers and the output.

• Parallel processing: MLPs and ANFIS can be trained and executed in parallel, taking advantage of modern parallel computing architectures and technologies. This enables efficient processing of large-scale optimisation problems and accelerates the optimisation process.
• Robustness to noise and uncertainty: MLPs and ANFIS can handle noisy and uncertain data by learning from the available information and generalising patterns. They can effectively deal with incomplete or imperfect data, making them suitable for optimisation problems where the input data may have inherent uncertainties.
• Multi-objective optimisation: MLPs and ANFIS can be extended to handle multi-objective optimisation problems, where multiple conflicting objectives must be optimised simultaneously. Various techniques, such as incorporating multiple outputs or evolutionary algorithms, can be used to tackle multi-objective optimisation tasks.
• Feature extraction and reduction: MLPs and ANFIS can automatically extract relevant features from complex input data, reducing the problem's dimensionality. This feature extraction capability helps identify important patterns and reduce the computational complexity of the optimisation task.
• Online and real-time optimisation: MLPs and ANFIS can be trained and deployed in online or real-time optimisation scenarios, where the optimisation process continuously adapts to changing conditions and dynamic environments. This makes them suitable for applications that require adaptive and responsive optimisation.

Table 1 .
Input and output data of the research.
Common methods include random initialisation or Xavier/Glorot initialisation. (iii) Training algorithm: select an optimisation algorithm such as gradient descent, stochastic gradient descent (SGD), Adam, or others to update the weights and biases during training. (iv) Loss function: choose an appropriate loss function that quantifies the difference between the predicted outputs and the target values. Common loss functions include Mean Squared Error (MSE).
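The initialisation, training-algorithm, and loss-function choices above can be illustrated with a minimal one-hidden-layer MLP regressor. This NumPy sketch uses Glorot-style initialisation, full-batch gradient descent, and the MSE loss; it is a didactic example, not the network configuration used in the study:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.1, epochs=500, seed=0):
    """Train a one-hidden-layer tanh MLP regressor by gradient descent on MSE.
    Returns a prediction function."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Xavier/Glorot-style initialisation of weights; biases start at zero
    W1 = rng.normal(0.0, np.sqrt(2.0 / (d + hidden)), (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, np.sqrt(2.0 / (hidden + 1)), (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        # Forward pass with tanh hidden activation
        H = np.tanh(X @ W1 + b1)
        pred = H @ W2 + b2
        # MSE loss: its gradient (prediction error) drives the updates
        err = pred - y.reshape(-1, 1)
        gW2 = H.T @ err / n;  gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1.0 - H ** 2)           # backprop through tanh
        gW1 = X.T @ dH / n;   gb1 = dH.mean(axis=0)
        # Plain (full-batch) gradient-descent update of weights and biases
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()
```

For instance, fitting the simple relation y = 2x on [-1, 1] drives the training MSE close to zero within a couple of thousand epochs.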

Table 2 .
The results of the network for the TLBO-ANFIS with different population sizes.

Table 3 .
The results of the network for the TLBO-MLP with different population sizes.

Table 4 .
The network results for the TLBO-ANFIS and TLBO-MLP.