Approximating heat loss in smart buildings through large scale experimental and computational intelligence solutions

The attainment of energy sustainability in the building sector can be realised by implementing a green building programme, which has grown significantly over the last thirty years. Green building is considered a technical and management strategy within the building and construction industries. Many different prediction methods, both complex and simple, have been proposed in recent years and applied to a wide variety of problems. Several case studies have highlighted factors that influence energy and resource usage in green buildings. The utilisation, trends, and consequences of wall and thermal insulation materials are examined. The main scope of this investigation is to predict buildings' heat loss by applying artificial neural networks according to the heat transfer coefficients of walls and coating materials, as well as indoor, outdoor, and external surface temperatures. The data were normalised and presented to two selected hybrid neural networks, in which harmony search (HS) and particle swarm optimisation (PSO) are used and contrasted. Two statistical indexes (R2 and RMSE) are used to evaluate the accuracy of the models. The performance of the PSO-MLP model is indicated by R2 values of 0.97055 and 0.87381 and RMSE values of 0.02534 and 0.09685 for the training and testing phases, respectively. Similarly, the accuracy of the HS-MLP model is indicated by R2 values of 0.93839 and 0.84176 and RMSE values of 0.03635 and 0.10753. The analysis in this paper shows that PSO-MLP predicts heat loss with higher accuracy and improved performance.


Introduction
Buildings in Europe are responsible for 36% of all CO2 emissions and 40% of all energy usage (Bemani et al., 2020; Recast, 2010). Predicting the amount of energy usage is essential in buildings to optimise energy performance and achieve energy conservation (Chen et al., 2022; Faroughi et al., 2020). However, due to the wide variations in energy kinds and building types, the energy system in structures is highly complicated (Tao et al., 2023). Cooling load, heating load, hot water, and electricity are the primary energy demands considered in the literature. The three building kinds most usually considered, ranging from small apartments to vast estates, are office, residential, and technical structures (Moayedi, Yildizhan, Aungkulanon, et al., 2023). The weather (particularly the temperature), the construction of buildings and the thermal features of the physical components used, and the occupants and how they behave all have an impact on a building's energy behaviour (Khosravi et al., 2023). The intricacy of the issue makes accurate consumption forecasts challenging (Yang et al., 2023). Many different prediction methods, both complex and simple, have been proposed in recent years and applied to a wide variety of problems. This investigative work has been done in the design, operation, or retrofit of modern structures, ranging from local, state, or federal modelling to examining the building's subsystems. Predictions may be made for the whole structure or for individual sub-level elements, either by carefully examining each influencing aspect or by estimating the usage from several important variables (Faroughi et al., 2020).
As early as the 1990s, investigators created various modelling techniques to forecast building energy requirements (Adnan, Dai, Kuriqi, et al., 2023; Adnan, Mostafa, et al., 2023). These technologies may be further categorised into engineering, AI-based, and hybrid methods (Foucquier et al., 2013). The engineering technique determines the energy usage of a building component or of the complete building by utilising thermodynamic equations to model the systems' physical behaviour and their interaction with the environment (Zhao & Magoulès, 2012). The fundamental logic of this strategy is transparent, hence its name, the 'white box' (Moayedi & Khasmakhi, 2023). The AI-based strategy, which differs from the engineering approach, is known as the 'black box' since it forecasts energy usage without modelling the fundamental relationships between the building and its constituent parts. The 'grey box' hybrid technique combines the black-box and white-box approaches to get around each approach's drawbacks. Grey-box and white-box techniques are time-consuming and need laborious expert effort for model creation. They both require precise architectural data to simulate the underlying relations and forecast energy usage (Shi, 2023). Using these two techniques to analyse existing structures and their energy consumption becomes laborious because it may be challenging, if not impossible, to precisely gather information on the mechanical and building-envelope specifications, preventing their widespread application to the existing building stock (Lin et al., 2011). The modern AI-based approaches for forecasting energy in buildings are covered in depth in this work, while a full evaluation of tools for predicting energy usage is available elsewhere (Foucquier et al., 2013). A single prediction technique using a single learning algorithm and an ensemble prediction approach combining several single prediction techniques to increase prediction accuracy are investigated and contrasted (Ikram, Hazarika, et al., 2023).
Building energy usage is predicted using AI-based algorithms based on associated variables, including environmental factors, building attributes, and occupancy levels (Xu et al., 2020). AI-based algorithms have been utilised in predicting building energy usage due to their effectiveness in making predictions (Zhang et al., 2021). The AI-based approaches have been compared with other forecast techniques for building energy usage in earlier research. To illustrate: Turhan et al. (2014) contrasted the Back Propagation Neural Network (BPNN) with KEP-IYTE-ESS to indicate the heating load of green buildings; Neto and Fiorelli (2008) contrasted EnergyPlus with an Artificial Neural Network (ANN) for anticipating energy usage in the building; and more. This research showed that AI-based systems are the most effective for predicting the energy usage of existing building stock since they have benefits over engineering and hybrid techniques, such as model simplicity, computation speed, and learning capacity (Kannan & Neeharika, 2007). AI model creation is quick because of its simple architecture and experimental data-collecting requirements (Adnan Ikram et al., 2023; Ikram, Dehrashid, et al., 2023; Yang et al., 2022). For example, the consecutive processes of the software structure cause energy simulation engines like EnergyPlus, which can model complex structures, to operate relatively more slowly than AI-based methods; the space temperature, for instance, is updated hourly utilising feedback from the HVAC module. Furthermore, using time series data, AI-based methods may forecast future energy usage behaviour (Xiao et al., 2022). In contrast, energy modelling software uses a more traditional forward approach and provides energy prediction for known structures on a yearly, monthly, hourly, or 15-minute basis (Ikram, Dehrashid, et al., 2023). In contrast to simulation methods of energy in buildings, AI-based models have the significant advantage of requiring a small number of variables that accurately describe the function of the building as a system (J. Wang, Tian, et al., 2022).
The remainder of the paper is organised as follows. Section 2 evaluates current research by categorising it according to the model in use. The discussion and areas for further study are included in Section 3, and our findings are covered in Section 4.

Experimental plan
Because of rising energy costs and climate change, it is necessary to decrease the consumption of carbon-based energy resources (Nabipour et al., 2020). Among the most successful strategies in this respect is installing thermal insulation on structures. By using less energy for building heating and cooling through heat insulation, CO2 emissions are reduced. This study used experimental research to evaluate the impact of different thermal insulation materials on CO2 emissions in the climate of Hakkari. The energy performance of various kinds of walls (aerated concrete, red brick, briquette, and reinforced concrete), along with their impacts on CO2 emissions, was evaluated using various insulating compounds (plaster, aerogel, XPS, and EPS) for this goal (Figure 1).

Established database
The research's outcome parameter is heat loss. The input variables include the wall's and the coating material's heat transfer coefficients and the room, outdoor, and external surface temperatures. The distribution of the input and output layers is illustrated in Figure 2. Figure 3 demonstrates four graphs that show the variation of the input layers.

Methodology
This study employs two methods, namely harmony search (HS) and particle swarm optimisation (PSO), in conjunction with artificial neural networks (ANN) to estimate heat loss in green buildings. As previously mentioned, the aim of this study is to determine the extent of heat loss in such structures. Given that HS, PSO, and ANN have been extensively discussed in previous literature, the present study provides only supplementary descriptions of these concepts. This study aimed to create novel hybrid techniques, specifically HS-ANN and PSO-ANN. Consequently, this section provides an overview of PSO and HS for contextualisation purposes. Figure 5 provides an elaborate depiction of the study's particulars.

Artificial neural network
The most popular artificial intelligence models for predicting building energy usage are ANNs (Qasem et al., 2019). This model is a successful strategy for this challenging application since it is proficient at tackling non-linear issues (Adnan, Dai, Mostafa, et al., 2023). Researchers have used ANNs over the past 20 years to analyse different kinds of building energy usage in various situations, including cooling, heating, electricity usage, the optimisation and operation of sub-level components, and the determination of consumption variables (Gu et al., 2023; Moayedi, Yildizhan, Al-Bahrani, et al., 2023; Zheng & Yin, 2022). We discuss the prior research in this part. Figure 6 depicts a schematic depiction of an ANN structure.
The use of ANNs in buildings' energy problems was briefly reviewed by Kalogirou (2006). Backpropagation neural networks were utilised by Kalogirou et al. (1997) to forecast the necessary heating demand for buildings. The data from 225 buildings, ranging from tiny areas to huge rooms, were used to train the model. A similar technique was used by Ekici and Aksoy (2009) to forecast the heating loads of three buildings. The yearly heating requirements of many modest single-family houses in northern Sweden were projected by Olofsson et al. (1998). Later, Olofsson and Andersson (2001) created a neural network with a high forecasting rate for single-family houses that predicts long-term energy consumption (the yearly heating requirement) based on short-term (typically 2-5-week) observed data.
To forecast cooling demand in a structure, Yokoyama et al. (2009) employed a BPNN. In their study, the identification of model parameters was performed using a global optimisation technique termed the modal trimming approach. Using hourly energy use data and a recurrent neural network, Kreider et al. (1995) could forecast future building heating and cooling requirements using just the current time stamp and weather. Ben-Nakhi and Mahmoud (2004) projected the cooling demand of three office buildings utilising the same recurrent neural network. For model training, a database of the cooling load from 1997 to 2000 was utilised, and for testing the method, a database from 2001 was utilised. To anticipate a passive solar building's energy usage without the need for mechanical or electrical heating systems, Kalogirou and Bojic (2000) employed neural networks. Cheng-wen and Jian (2010) employed a BPNN to estimate the cooling and heating loads of buildings in distinct climatic areas defined by cooling and heating degree-days, considering the impact of weather on energy usage in various locations. These two energy measurements served as training inputs for the neural network.
ANNs are also frequently utilised to study and improve the behaviour of sub-level components of HVAC systems (Kargar et al., 2020; Shabani et al., 2023). The prediction of air-conditioning load in a building by Hou et al. (2006) is crucial for the best management of the HVAC system. A general regression neural network was utilised by Lee et al. (2004) to identify and diagnose issues with air-handling equipment in a building. According to Aydinalp et al. (2002), the neural network may be utilised to predict the energy usage of appliances, lights, and space cooling in the Canadian residential sector. It is also a useful model for estimating the influence of socioeconomic variables on this usage. In their further research, neural network models were created to accurately predict household hot-water heating and space-heating energy usage in the same sector (Aydinalp et al., 2004).
The number of hidden layers and the number of neurons in each layer are two critical design parameters that can significantly impact the performance of an ANN (Moayedi, Canatalay, Ahmadi Dehrashid, et al., 2023). Each plays a significant role (Zhang et al., 2023). Number of hidden layers: the number of hidden layers determines the complexity of the ANN's architecture. A shallow neural network with one or two hidden layers may be sufficient for simple problems, but for more complex problems, a deep neural network with multiple hidden layers may be required. Deep neural networks can learn hierarchical representations of data, which can help them capture more complex relationships between input features and output variables (Du et al., 2023).
However, increasing the number of hidden layers can also make the network more difficult to train, as vanishing or exploding gradients can occur during backpropagation. To mitigate this problem, various techniques, such as batch normalisation, residual connections, and skip connections, have been developed to improve the training of deep neural networks. Number of neurons: the number of neurons in each layer determines the capacity of the network to learn complex patterns. Many neurons can help the network model complex non-linear relationships between input features and output variables. Still, this can also lead to overfitting if the network is improperly regularised.
On the other hand, a small number of neurons can limit the network's capacity to learn complex patterns, leading to underfitting. Therefore, the number of neurons should be chosen carefully based on the problem's complexity and the dataset's size. Table 1 shows the outcomes of the neural network.
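To make these two design parameters concrete, the sketch below builds a minimal MLP in plain Python, where a tuple of layer sizes directly encodes the number of hidden layers and the neurons in each. This is an illustrative toy under our own assumptions (the function names and the tanh/linear activation choice are ours, not the exact network used in this study):

```python
import math
import random

def make_mlp(layer_sizes, seed=0):
    """Create random weights for an MLP.

    layer_sizes such as (5, 8, 1) gives 5 inputs, one hidden layer of
    8 neurons, and 1 output; each neuron stores its weights plus a bias.
    """
    rng = random.Random(seed)
    return [[[rng.uniform(-1, 1) for _ in range(layer_sizes[k] + 1)]  # +1 bias
             for _ in range(layer_sizes[k + 1])]
            for k in range(len(layer_sizes) - 1)]

def forward(net, x):
    """Propagate an input vector through the network (tanh hidden units, linear output)."""
    a = list(x)
    for layer in net[:-1]:
        a = [math.tanh(sum(w * v for w, v in zip(neuron[:-1], a)) + neuron[-1])
             for neuron in layer]
    return [sum(w * v for w, v in zip(neuron[:-1], a)) + neuron[-1]
            for neuron in net[-1]]
```

For example, `(5, 8, 1)` mirrors this study's five inputs and single heat-loss output with one hidden layer of eight neurons; adding entries to the tuple deepens the network.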

Hybrid model development
Three performance metrics, namely the root-mean-squared error (RMSE), the coefficient of determination (R2), and the mean absolute error (MAE), were employed to evaluate the quality of the HS-MLP and PSO-MLP models:

RMSE = sqrt[(1/n) Σ (y_i − ŷ_i)²]

R2 = 1 − Σ (y_i − ŷ_i)² / Σ (y_i − ȳ)²

MAE = (1/n) Σ |y_i − ŷ_i|

where n denotes the number of occurrences and ȳ, y_i, and ŷ_i represent the response variable's average, estimated, and modelled quantities, respectively.
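Using the notation above, the three metrics can be computed directly; the following is a minimal plain-Python sketch (the function names are ours):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-squared error between estimated and modelled values."""
    n = len(y_true)
    return math.sqrt(sum((y - p) ** 2 for y, p in zip(y_true, y_pred)) / n)

def mae(y_true, y_pred):
    """Mean absolute error."""
    n = len(y_true)
    return sum(abs(y - p) for y, p in zip(y_true, y_pred)) / n

def r2(y_true, y_pred):
    """Coefficient of determination: 1 minus residual over total sum of squares."""
    y_bar = sum(y_true) / len(y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    ss_tot = sum((y - y_bar) ** 2 for y in y_true)
    return 1.0 - ss_res / ss_tot
```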

Harmony search (HS)
Each solution in the fundamental HS method is referred to as a 'harmony' and is defined as an n-dimensional real vector (Geem et al., 2001). The harmony memory (HM) stores a randomly generated starting population of harmony vectors. Then, utilising rules of memory consideration, pitch adjustment, and random re-initialisation, a new candidate harmony is created from the solutions in the HM (Lee & Geem, 2004). The HM is finally updated by contrasting the new candidate harmony with the HM's worst harmony vector: if the new candidate vector is superior to the worst harmony vector in the HM, it takes its place. This procedure is repeated until a specific termination criterion is satisfied. The Harmony Search algorithm proceeds as follows (Haghshenas et al., 2021): (1) Initialise the harmony memory with random solutions. (2) Evaluate the objective function for each solution. (3) Determine the best solution in the harmony memory. (4) Generate a new solution by combining existing solutions in the memory (harmony improvisation), drawing each component from a randomly chosen harmony and adjusting it with a random probability. (5) Evaluate the objective function for the new solution and update the harmony memory by replacing the worst solution with the new solution if it is better. (6) Repeat steps 4-5 until a stopping criterion is met (e.g. the maximum number of iterations or convergence). (7) Return the best solution found in the harmony memory.

Issue initialisation and variable of algorithm
The general formulation of the global optimisation problem is as follows: minimise f(x) subject to x(j) ∈ [LB(j), UB(j)], j = 1, 2, . . ., n, where x = (x(1), x(2), . . ., x(n)) is the set of design parameters, n is the number of design parameters, and LB(j) and UB(j) are the lower and upper limits for the design parameter x(j), respectively (Shaffiee Haghshenas et al., 2022).
The harmony memory size (HMS), i.e. the number of solution vectors in the harmony memory, the pitch adjusting rate (PAR), the distance bandwidth (BW), the harmony memory consideration rate (HMCR), and the number of improvisations (NI) are the parameters of the HS algorithm. The NI equals the total number of function evaluations. Selecting the right parameters improves the algorithm's capacity to find the global optimum, or an area close to it, with a high convergence rate.

Initialise the harmony memory (HM)
The HM consists of HMS harmony vectors. Let X_i = {x_i(1), x_i(2), . . ., x_i(n)} denote the ith harmony vector, which is created at random according to x_i(j) = LB(j) + (UB(j) − LB(j)) × r, where r is a uniformly distributed random number between 0 and 1, i = 1, 2, . . ., HMS, and j = 1, 2, . . ., n. The HMS harmony vectors are then stored row-wise in the HM matrix.

Devise a new harmony
Three rules are used to devise a new harmony vector, X_new: memory consideration, pitch adjustment, and random selection. A random number r_1 is first generated in the [0, 1] range. If r_1 is smaller than HMCR, the decision variable x_new(j) is generated by memory consideration; otherwise, x_new(j) is generated by random selection (that is, random re-initialisation within the search bounds). For memory consideration, x_new(j) is chosen from the jth component of any harmony vector i in {1, 2, . . ., HMS}. Second, if a decision variable is updated by memory consideration, it is pitch-adjusted with probability PAR according to the rule x_new(j) = x_new(j) ± r × BW, where r is a uniformly distributed random number in [0, 1].

Update harmony memory
The harmony memory is updated through a survival-of-the-fittest competition between the newest harmony vector, X_new, and the worst harmony vector, X_W, in the HM. If X_new's fitness value is better than X_W's, X_new takes the place of X_W and joins the HM.
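The HS steps described above (initialisation, improvisation with memory consideration and pitch adjustment, and worst-member replacement) can be sketched as follows. This is a generic minimiser with parameter defaults of our own choosing, not the exact implementation tuned in this study:

```python
import random

def harmony_search(f, lb, ub, hms=20, hmcr=0.9, par=0.3, bw=0.1, n_iter=2000):
    """Minimise f over box bounds [lb, ub] with the basic HS algorithm."""
    n = len(lb)
    # Initialise the harmony memory (HM) with HMS random vectors.
    hm = [[lb[j] + (ub[j] - lb[j]) * random.random() for j in range(n)]
          for _ in range(hms)]
    fit = [f(x) for x in hm]
    for _ in range(n_iter):
        # Improvise a new harmony: per decision variable, apply memory
        # consideration, pitch adjustment, or random re-initialisation.
        new = []
        for j in range(n):
            if random.random() < hmcr:
                v = hm[random.randrange(hms)][j]        # memory consideration
                if random.random() < par:               # pitch adjustment
                    v += bw * (2 * random.random() - 1)
                    v = min(max(v, lb[j]), ub[j])
            else:                                       # random selection
                v = lb[j] + (ub[j] - lb[j]) * random.random()
            new.append(v)
        # Replace the worst harmony if the new one is better (minimisation).
        worst = max(range(hms), key=lambda i: fit[i])
        fv = f(new)
        if fv < fit[worst]:
            hm[worst], fit[worst] = new, fv
    best = min(range(hms), key=lambda i: fit[i])
    return hm[best], fit[best]
```

For instance, calling `harmony_search(lambda x: sum(v*v for v in x), [-5, -5], [5, 5])` drives the best fitness toward the sphere function's minimum at the origin.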

Particle swarm optimization (PSO)
PSO is an algorithm that draws inspiration from the behaviour of social creatures such as flocking birds and schooling fish. PSO is a stochastic optimisation approach that Eberhart and Kennedy devised and refined (Eberhart & Kennedy, 1995). It has also been categorised as a heuristic method. The initial aim of the PSO method is to model the social information exchange between individuals in a group. Each individual is treated as a particle inside the population, and the particles carry out a search process in a search space (B. Wang, Rahbari, et al., 2022). They exchange wisdom and expertise to update better positions while searching (Nguyen et al., 2020). As a result, it is also regarded in the statistical field as an evolutionary computing approach (Armaghani et al., 2014; Gordan et al., 2016; Moayedi et al., 2019; Moayedi et al., 2020; Nguyen et al., 2020; Yang et al., 2019). The PSO algorithm uses five phases to provide the best possible search (Mikaeil et al., 2018): -Step 1: Establish the initial population and the particle velocities. The next step is calculating each particle's fitness and identifying the local and global optimum locations. -Step 2: With the starting velocity determined in Step 1, each particle travels through the search space. The velocity is determined by both the local and global bests: the local best corresponds to the best outcome per loop, and the global best corresponds to the best particle position found so far.
In other words, the velocity is modified in this phase to match the local best and global best per loop (Guido et al., 2022), according to: v_j^(i+1) = w × v_j^(i) + c_1 × r_1 × (pbest_j − x_j^(i)) + c_2 × r_2 × (gbest_j − x_j^(i)), with the position update x_j^(i+1) = x_j^(i) + v_j^(i+1), where r_1 and r_2 represent random numbers in the [0, 1] interval, x_j^(i) signifies the location of the particle, v_j^(i) represents the jth particle's speed at the ith repetition, w denotes the inertia weight coefficient, c_1 and c_2 are the acceleration coefficients, and i is the number of repetitions (Guido et al., 2020).
-Step 3: The particles move at the updated velocity within the search space once the new velocity has been estimated. Each position's fitness is assessed and adjusted accordingly using a fitness function. -Step 4: Update the global and local bests for the best situation with the lowest RMSE; the most recent local position may replace the previous one. -Step 5: Verify that the search was successful. Stop the search if the particle has the highest fitness (lowest RMSE). If not, return to Step 2.
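The five steps above correspond to the following compact sketch, again a generic minimiser with assumed parameter values (such as w = 0.7 and c1 = c2 = 1.5), not the exact configuration used in this study:

```python
import random

def pso(f, lb, ub, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimise f over box bounds [lb, ub] with a basic PSO."""
    n = len(lb)
    # Step 1: random initial positions, zero velocities, initial bests.
    X = [[random.uniform(lb[j], ub[j]) for j in range(n)] for _ in range(n_particles)]
    V = [[0.0] * n for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pf = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pf[i])
    gbest, gf = pbest[g][:], pf[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            # Steps 2-3: velocity update pulled toward personal and global
            # bests, then position update clamped to the bounds.
            for j in range(n):
                r1, r2 = random.random(), random.random()
                V[i][j] = (w * V[i][j] + c1 * r1 * (pbest[i][j] - X[i][j])
                           + c2 * r2 * (gbest[j] - X[i][j]))
                X[i][j] = min(max(X[i][j] + V[i][j], lb[j]), ub[j])
            # Steps 4-5: refresh personal and global bests.
            fv = f(X[i])
            if fv < pf[i]:
                pbest[i], pf[i] = X[i][:], fv
                if fv < gf:
                    gbest, gf = X[i][:], fv
    return gbest, gf
```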

Results and discussion
In the current step, 80% of the total database was randomly selected to create models for predicting heat loss from buildings, while the remaining 20% was utilised to test the models' performance. It should be noted that all models employed identical testing and training databases and resampling approaches. To accurately predict heat loss from green buildings, it is important to consider the viability of the suggested HS-MLP and PSO-MLP models. The HS and PSO algorithms were integrated with the ANN model. Before completing the ANN model optimisation, the HS and PSO method parameters were optimally determined. Once the HS and PSO algorithms' parameters had been specified, the procedure of searching for and optimising the ANN's variables was carried out. Two sophisticated computational techniques are thus devised to forecast the heat loss of residential structures. Each method is initially trained utilising the training dataset. Figures 8 and 9 illustrate how well the proposed models predict heat loss according to the influential variables in the buildings' training and testing datasets.
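The 80/20 random hold-out described above can be sketched as follows; this is an illustration with a hypothetical `seed` argument, and the study's actual resampling details are not reproduced:

```python
import random

def train_test_split(data, test_fraction=0.2, seed=42):
    """Randomly partition records into training and testing subsets."""
    rng = random.Random(seed)
    indices = list(range(len(data)))
    rng.shuffle(indices)
    n_test = int(round(len(data) * test_fraction))
    test_idx = set(indices[:n_test])
    train = [row for i, row in enumerate(data) if i not in test_idx]
    test = [row for i, row in enumerate(data) if i in test_idx]
    return train, test
```

Fixing the seed keeps the partition identical across models, matching the requirement that all models employ the same training and testing databases.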
The calculated R2 values show that over 70% of target and output heat losses are consistent. Additionally, for population sizes of 100 and 250, the estimated training R2 values (0.93839 and 0.97055 for HS-MLP and PSO-MLP, respectively) and testing R2 values (0.84176 and 0.87381 for HS-MLP and PSO-MLP, respectively) demonstrate the excellent accuracy of the models in forecasting the HL. Following the creation of the heat loss prediction models, the models' performance was assessed using observations from the testing dataset with the R2 and RMSE, as presented in Tables 2 and 3. The Total Score Ranking (TSR) method was also used to assess the models.
The Total Score Ranking Method is a technique used to rank items, participants, or candidates based on their cumulative scores in various categories or criteria.This method is often used in competitions, evaluations, or decision-making processes where multiple factors contribute to each item or individual's overall performance or value.
Here's a step-by-step guide to implementing the Total Score Ranking Method: Identify the criteria: Determine the factors or categories you will use to evaluate and compare the items or participants.
Assign weights to the criteria (optional): If some criteria are more important than others, you can assign weights to each criterion to give them different levels of importance in the overall ranking.If all criteria are equally important, you can skip this step.
Score each item or participant: For each criterion, assign a score to each item or participant. The score can be a numerical value, a letter grade, or any other meaningful assessment (Li et al., 2023).
Calculate the weighted scores (if applicable): Multiply each score by the corresponding weight for the criterion.Then, sum the weighted scores for each item or participant.
Calculate the total scores: Add the scores (or weighted scores) for each criterion to get the total score for each item or participant.
Rank the items or participants: Sort them in descending order based on their total scores, with the highest total score ranked first, and the lowest total score ranked last.
The Total Score Ranking Method provides a quantitative approach to comparison and decision-making. However, it is important to note that this method is only as accurate and reliable as its criteria and scoring system. Careful consideration should be given to selecting criteria and assigning scores to ensure a fair and meaningful ranking (Gu et al., 2022).
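The steps above can be sketched as a small routine. The item names and per-criterion scores below are hypothetical, chosen only so that the winning total lands on the same 38-point scale as the ranks reported in the tables:

```python
def total_score_ranking(items, weights=None):
    """Rank items by the (optionally weighted) sum of their criterion scores.

    `items` maps each item name to a list of per-criterion scores.
    Returns (name, total) pairs sorted with the highest total first.
    """
    totals = {}
    for name, scores in items.items():
        if weights is not None:
            # Weighted variant: multiply each score by its criterion weight.
            totals[name] = sum(w * s for w, s in zip(weights, scores))
        else:
            totals[name] = sum(scores)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

For example, `total_score_ranking({"PSO-MLP": [10, 9, 10, 9], "HS-MLP": [8, 8, 9, 8]})` places PSO-MLP first with a total of 38.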
The findings in Tables 2 and 3 supported the predictive capability of the ANN approaches suggested in this work for the heat loss of domestic structures. With an RMSE of 0.02534, an R2 of 0.97055, and a rank of 38, the suggested PSO-MLP model was determined to be the most effective ANN approach for forecasting the heat loss of residential structures.
The HS-MLP model findings confirmed the substantial optimisation capacities of the HS method in this research (RMSE = 0.03635 and 0.10753, R2 = 0.93839 and 0.84176, total ranking of 38, Table 2). Table 3 clearly shows that the PSO-MLP model performed somewhat better than the HS-MLP model (overall ranking of 38, RMSE = 0.02534 and 0.09685, and R2 = 0.97055 and 0.87381). Overall, the PSO-MLP model surpassed the HS-MLP model.
The efficacy of the applied models is evaluated in this part by comparing the outputs (that is, the predicted HLs) to the target values (that is, the calculated HLs). Figures 10 and 11 present the outcomes of the testing and training stages. The training RMSE values for the two population sizes of 500 and 250 are 0.037298 and 0.00064246 for HS-MLP and PSO-MLP, respectively. The estimated MAEs (0.026202 and 0.019014) also indicate that both models have little training error.
The testing RMSEs (0.114 and 0.010324) show how the weights (and biases) modified using the HS and PSO algorithms may create useful ANNs during the testing phase. Additionally, the two algorithms' MAE values of 0.05927 and 0.053576 exhibit a respectable generalisation error.

Taylor diagram
In addition to the performance metrics mentioned in the above section, a Taylor diagram has been utilised to investigate its application to the current heat loss issue (Taylor, 2001). The Taylor diagram is a graphical method for comparing the performance of multiple models or simulations against a reference dataset. Taylor diagrams quantify the agreement between model simulations and observations by illustrating, for each approach, the standard deviation and the correlation with the observations. Figure 12 demonstrates the Taylor diagram for the utilised approaches. Examining the Taylor diagram closely, it is ascertained that in both the train and test stages, the hybrid PSO-MLP results (where an MLP process has been combined with the PSO algorithm as an optimiser method) show a better outcome, with a coefficient of determination of 0.97055. Notably, the root mean square error (RMSE) for the hybrid PSO-MLP approach is below 0.10. Consistent with these results, the hybrid PSO-MLP approach stands out as the best-performing model.
In summary, the Taylor diagram is a powerful tool for comparing and visualising the performance of multiple models or simulations against a reference dataset. By providing a comprehensive assessment of skill across multiple performance metrics, it can help identify models that perform better overall or in specific aspects of the variability, and it facilitates the evaluation of model performance in complex and multidimensional problems.
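The quantities plotted on a Taylor diagram can be computed as below; the sketch also verifies the law-of-cosines identity that relates the centred RMS difference to the two standard deviations and the correlation (the function name is ours):

```python
import math

def taylor_stats(model, ref):
    """Return (sigma_m, sigma_r, corr, crmsd) for a model against a reference.

    crmsd is the centred RMS difference; the four quantities satisfy the
    identity underlying the Taylor diagram:
        crmsd**2 = sigma_m**2 + sigma_r**2 - 2 * sigma_m * sigma_r * corr
    """
    n = len(ref)
    mb = sum(model) / n
    rb = sum(ref) / n
    sigma_m = math.sqrt(sum((m - mb) ** 2 for m in model) / n)
    sigma_r = math.sqrt(sum((r - rb) ** 2 for r in ref) / n)
    corr = (sum((m - mb) * (r - rb) for m, r in zip(model, ref))
            / (n * sigma_m * sigma_r))
    crmsd = math.sqrt(sum(((m - mb) - (r - rb)) ** 2
                          for m, r in zip(model, ref)) / n)
    return sigma_m, sigma_r, corr, crmsd
```

A well-performing model plots close to the reference point: its standard deviation nearly matches the reference's and its correlation approaches one.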

Model's restrictions
Both the PSO-MLP and HS-MLP models have certain restrictions and limitations.Here are some of the main ones:

Computational complexity
Both PSO-MLP and HS-MLP models involve the optimisation of multiple hyperparameters, including the number of hidden layers, the number of neurons in each layer, and the learning rate.This optimisation process can be computationally expensive, especially for large datasets or high-dimensional problems.

Overfitting
PSO-MLP and HS-MLP models are prone to overfitting, which occurs when the model is too complex and captures noise or random fluctuations in the training data. Overfitting can lead to poor generalisation performance and reduced accuracy on new, unseen data.

Limited interpretability
PSO-MLP and HS-MLP models are black-box models, meaning they do not provide explicit information about the underlying relationships between the input and output variables. This can limit their interpretability and make it difficult to understand how the model arrives at its predictions.

Sensitivity to initial conditions
Both PSO-MLP and HS-MLP models are sensitive to the initial conditions and require careful tuning of the algorithm parameters to achieve good performance. This sensitivity can make it challenging to reproduce the results or to apply the models to new datasets or problems.

Limited applicability
PSO-MLP and HS-MLP models are most suitable for problems with continuous input and output variables and may not perform well for problems with categorical or discrete variables.In addition, they may not be suitable for problems with complex or non-linear relationships between the input and output variables.
In summary, while the PSO-MLP and HS-MLP models have shown promising results in some applications, they have certain restrictions and limitations that must be carefully considered when applying them to new problems or datasets. Careful evaluation and validation of the models are necessary to ensure their robustness and reliability.

Conclusions
This study proposes two novel estimation methods for residential building heat loss. Five input factors are considered: the wall's and coating material's heat transfer coefficients and the indoor, outdoor, and external surface temperatures. This study demonstrates a way of accurately evaluating the heat loss value of green buildings. Two alternative ANN models, HS-MLP and PSO-MLP, are considered and compared in the quest for the best technique. The average accuracy of each model's heat loss predictions is also used to assess and examine its performance.
One of the key objectives in developing smart cities is to design and optimise buildings' heat loss systems. Buildings may become more energy efficient, decrease financial losses, and have a smaller negative impact on the environment by managing heat loss (HL) effectively. Both methods accurately estimate heat loss from residential structures (HS-MLP and PSO-MLP, respectively, RMSE = 0.03635 and 0.02534, and R2 = 0.93839 and 0.97055). The following observations and conclusions are based on the study's findings: -HS-MLP and PSO-MLP, the two ANN approaches used in this research, were strong candidates for determining heat loss in residential structures. In particular, the suggested PSO-MLP model could estimate a building's heat loss with great reliability. -The suggested PSO-MLP model was a reliable method that correctly predicted the building's heat loss with a favourable outcome (RMSE = 0.02534 and 0.09685, and R2 = 0.97055 and 0.87381). It may be employed as an alternative tool to experimental measures. Building design techniques may also be developed based on the suggested PSO-MLP method to reduce heat loss in buildings. The HS-MLP model also reached a high-accuracy performance.
This investigation shows that ANN approaches can forecast heat loss with a reasonable degree of accuracy based on commonly recorded historical data. Several mathematical models for calculating heat loss have been suggested in the literature. It has been observed that previous studies have struggled to strike a balance between accuracy and simplicity when modelling heat loss. Although this article's results were promising for anticipating residential buildings' heat loss, further study is required on this issue; for instance, other methods could be applied to improve accuracy, or new ANN approaches could be developed utilising these methods. Future engineering problems within the scope of energy efficiency, including building design, might be addressed utilising the methods from this study.

Figure 1 .
Figure 1. A view of different walls.

Figure 2 .
Figure 2. The graphical view of the output and input variables.

Figure 3 .
Figure 3. Variation of the input layers. (a) Wall's heat transfer coefficients and coating material's heat transfer coefficients; (b) indoor temperature and outdoor temperature; (c) wall's heat transfer coefficients and external surface temperature; (d) coating material's heat transfer coefficients and external surface temperature.

Figure 4 .
Figure 4. Variation of the input layers.

Figure 6 .
Figure 6. An MLP structure for the current article.
Figure 7 shows their performance. According to Figure 7, an ideal HS-MLP model with a 100-swarm size and the lowest RMSE (RMSE = 0.03635 and 0.10753) and an ideal PSO-MLP model with a 250-swarm size and the lowest RMSE (RMSE = 0.02534 and 0.09685) were both discovered. The previously mentioned HS-MLP has also been developed for contrast and for the general function evaluation of the developed PSO-MLP model.
Figures 10 and 11 show the testing and training stages' outcomes and the differences between each pair of output and target heat loss. The obtained error values during the training phase for the HS-MLP (population size = 500) and PSO-MLP (population size = 250) models are [−7.8264e-05, 0.037506] and [−0.00060972, 0.025481], respectively.
The Taylor diagram was introduced by Taylor (2001) as a way to assess climate models' skill in reproducing observed variability patterns. It is based on a polar coordinate system in which the radial distance from the origin represents the standard deviation (σ) of each dataset and the azimuthal angle encodes the correlation coefficient (r) between the model output and the reference dataset; the reference dataset lies on the horizontal axis at a radius equal to its own standard deviation. The Taylor diagram provides several advantages over other methods for comparing model performance, such as scatterplots or bar charts. First, it allows for the visualisation of multiple performance metrics (correlation and standard deviation) in a single plot, facilitating the comparison of multiple models or simulations. Second, it allows for evaluating the magnitude and direction of the differences between the model output and the reference dataset, providing a more comprehensive assessment of skill. Finally, it provides a clear and intuitive representation of the trade-offs between correlation and standard deviation, allowing for the identification of models that perform better overall or in specific aspects of the variability.

Figure 12 .
Figure 12. Taylor diagrams for the best-fit structures of the HS-MLP and PSO-MLP proposed predictive networks. (a) Training dataset; (b) testing dataset.

Table 1 .
The optimisation outcomes of the neural network.

Table 2 .
The outcomes of the network for the suggested HS-MLP method.

Table 3 .
The outcomes of the network for the suggested PSO-MLP method.