Design of decision-making support system in power grid dispatch control based on the forecasting of energy consumption

Abstract Electric grids are constantly expanding, and supervisory control and management methods must also be improved and changed in order to maintain a reliable and safe power supply to consumers. This article proposes a methodology for supporting dispatch decision-making based on electrical load forecasting. The energy consumption forecast is produced by a deep neural network, and depending on the value obtained, a recommendation for optimising the operation of the energy system is proposed. Thus, dispatch service employees will be able to make decisions on managing the energy system based on the recommendations received, which will increase the speed of decision-making and improve the efficiency of the entire dispatch centre. Intelligent data processing and the proposed decisions also make it possible to consider and compare factors that may be missed because of the human factor when the information is processed directly by the dispatcher. Retrospective data on consumed power, the ambient temperature, and the type of day of the week are proposed as a knowledge base for forecasting energy consumption and training the neural network. The proposed neural network achieved a mean absolute percentage error (MAPE) of 1.922%. The obtained accuracy allows the forecasting results to be used for dispatch control.


Subjects: Automation Control; Intelligent Systems; Machine Learning
Natalya Kalantayevskaya

ABOUT THE AUTHOR
The research group consists of employees of M. Kozybayev North Kazakhstan University in Petropavlovsk, Kazakhstan; the Civil Aviation Academy in Almaty, Kazakhstan; and the Institute of Information and Computational Technology MES RK in Almaty, Kazakhstan. The main research activities of the group include decision support systems, signal processing, machine learning, and prediction of electricity consumption. These days, the introduction of digital technologies is taking place in almost all areas of life. This research proposes an intelligent system of dispatch decision support for process control in the electric power industry, which will automate and improve the efficiency of dispatch control of electrical systems.

PUBLIC INTEREST STATEMENT
Electricity grids are complex, versatile systems that require constant monitoring and control for reliable and stable operation. This article proposes a method for calculating the consumption of electrical energy in advance, that is, predicting consumption for the next day. Based on the prediction, a scheme of electricity supply to consumers can then be created, which allows producing and distributing, at the right time, a sufficient amount of energy of the required quality. In this research, prediction was carried out using artificial intelligence, namely artificial neural networks. The advantage of neural networks over other prediction methods is the ability to work with incomplete raw data and without the need to create a complex mathematical model.

Introduction
One of the most important tasks in the electric power industry is to provide consumers with electric energy on a reliable and stable basis. Due to rapid economic development, the structure and scale of power grids are becoming increasingly complex, and the operation and management of large power grids face more serious problems. The human factor, equipment failures, natural disasters, and other internal and external factors can pose a great threat to the safe and stable operation of the power system. The traditional protection strategy based on local information can no longer satisfy the needs of safe and stable operation of complex power systems (Jiao & Fushuan, 2008).
All power system equipment necessary for the production and further distribution of electricity is controlled by the dispatching services of the power system entities or directly by operational personnel. The control objective of the power control centre is to maintain nominal frequency and ensure that all facilities operate under normal conditions. If a fault event occurs, quick actions are needed to clear the fault and to minimise the number of customers affected and the blackout area (Liu et al., 2013). Almost all power supply control centres in the world use supervisory control and data acquisition (SCADA) systems, power failure analysis and processing systems (PFAP), error registration systems, etc. in order to monitor, analyse, and manage power systems. The main SCADA/EMS concepts and structures were laid down in the 1970s-1980s, when the amount of data collected by the system and available to the operator was limited (Kropp, 2006; Luo, 2004; Wu et al., 2016). With the latest generations of SCADA/EMS, the data available to the system operator have increased exponentially (Liu et al., 2013).
In emergency situations and malfunctions, due to the lack of unified comprehensive tools for processing and analysing fault data, the influx of information makes it difficult for operators to filter information accurately and identify malfunctions. The dispatcher is unable to accurately judge a malfunction, find useful information, analyse the distribution of power flow, redistribute the power flow, and change the network topology, that is, perform the analysis over a large amount of data. As a result, time is lost in dealing with failures and the causes of an accident (Wu et al., 2016; Liu et al., 2013; Chen et al., 2018). Thus, the efficiency of the analysis depends on the experience of the person (Luo, 2004).
On the other hand, the use of various information and automated systems in the energy sector has accumulated a significant amount of statistical data characterising various aspects of equipment operation: the values of various technological parameters, information about registered defects and malfunctions, and additional heterogeneous information. These data can be used to obtain new knowledge about energy facilities, create management systems, and support decision-making (Andryushin et al., 2019; Lu, 2017). The latest information and communication technologies provide a platform for improving the functions and performance of power management centres (Kropp, 2006).
Improving the quality of energy system operation increasingly depends on accurate decision-making (Hemmati et al., 2020).
In order to operate a complex system such as a large-scale power system safely, it is necessary to consider the human factor and visualisation methods when designing a control centre so as to improve the performance and decision-making of the system operator in emergency situations (Liu et al., 2013). Contemporary problems with the electrical grid require innovative solutions that include various methods of machine learning and network science (Giannakis et al., 2013). Also, in traditional grid management, the grid operator must always maintain the balance between supply and demand to avoid grid security problems and economic losses. The grid operator uses planning to ensure that power plants produce the right amount of electricity at the right time to meet consistent and reliable electric demands (Fentis et al., 2019).
Load is fundamental and vital information for power generation facilities and traders, especially in production planning, day-to-day operations, unit commitment and economic dispatch (Hahn et al., 2009).
In this article, in contrast to existing dispatch control systems, a new system is presented that uses the results of neural network energy consumption forecasting in dispatch control. An approach is proposed in which an accurate and fast assessment of the energy load one hour ahead is used in making dispatch decisions. Also, compared to the most common load forecasting method, where the forecast is based on historical data alone, the authors added the hourly temperature and a day-off parameter, which improved the accuracy and productivity of the forecasting system. Thus, a new intelligent method for load forecasting and further analysis of the forecast is presented, which implements the exchange of information with the dispatch service and offers an auxiliary solution or warning; that is to say, it provides technical support to the decision-making personnel in managing and planning the operation of the power system.
In Chen et al., 2018, the intelligent auxiliary decision-making system for power grid accident management is located in the EMS platform, which reflects the characteristics of accident treatment in the regional power grid and provides intelligent technical support for fault diagnosis and for assessing power grid operating conditions. In the event of an accident abnormality, simple and complex faults in the power grid can be analysed by capturing the real-time switching variables and tidal current data and protecting the security information of the self-assembly device, so that the fault diagnosis and disposal scheme can be produced quickly and accurately, the risk of mal-operation and misuse can be reduced, and the efficiency and accuracy of accident handling can be improved.
The research in Chandler et al., 2014 develops a controller for transactive resources, which integrates real-time data from a SCADA system or other sources, as well as dispatch optimisation software, for each independent smart grid system control layer at a minimum. Each simulator in the system may choose independent operating strategies through time based on its own unique instance of the controllable resource portfolio, which a contextual controller may choose from, or mix, based on real-time requirements concerning timing or least cost. The authors develop a centralised control platform to manage such a combination of dispatch simulators using a database with embedded logic. This has several advantages. First, the variable data have little system infrastructure to traverse by way of web services, reducing the time required to deliver the data. Second, by containing the control layer within the database, the system mimics a PLC resource, allowing the controller to integrate seamlessly with a SCADA system. The controller has structural elements for passing data to simulators and managing their output.
Based on the status analysis of IPDG (incremental power distribution grid) projects and the operation goals of PGE in the opening of the IPDG business, Dong et al., 2019 put forward a decision-making scheme for investing in IPDG projects and explain every specific implementation method and technical roadmap in the different phases. The decision scheme takes into account the electrical load demand forecast, which is affected by many factors, such as the introduction of distributed generation, reform of electricity price-setting mechanisms, and user interactive services.
One of the concepts introduced in Liu et al., 2013 to the power dispatch cockpit is the set of functions required by different management levels. For example, the system operators need all monitoring and control functions in the control centre. The upper-level managers could have smaller cockpits in their offices, displaying only information related to their management jobs. Different departments have different access rights to operation information and different authorisations for controlling the system. Celik et al., 2013 investigated a novel dynamic data-driven adaptive simulation (DDDAMS) framework designed for the efficient and reliable real-time dispatching of electricity under uncertainty. The proposed framework includes 1) a database receiving data from electrical and environmental sensors of a power grid, 2) an algorithm for online state estimation of the demand nodes in the considered electrical grid using particle filtering, 3) an algorithm for effective culling and fidelity selection in simulation, considering the trade-off between the computational requirements of simulations and the accuracy of anticipated dispatch results in terms of environmental and economic costs, and 4) data-driven simulation for mimicking the system response behaviour and generating a dispatch configuration that minimises the total operational cost and power loss of the system without posing security risks to the energy network.
In Cheung et al. (2010), the creation of the Smart Dispatch system is proposed. One core function of Smart Dispatch is the Generation Control Application (GCA), which aims at enhancing operators' decision-making process under changing system conditions (load, generation, interchanges, transmission constraints, etc.) in near real time.

Proposed decision-making support system
Structurally, the methodology for decision-making support is described by the system represented in Figure 1 and includes a data input unit (DIU), a unit of energy consumption forecast (UECF), a database (DB) unit for neural network training, and a unit of support of dispatching decisions taking (USDDT).
The energy consumption is a multi-factor process. In this research, the energy consumption model is accepted as a non-linear function of the form (P. V. Belyaev; Koshekov et al., 2019):

$$W = f\left(W_1^0, W_2^0, T, Q, t\right), \qquad (1)$$

where $W$ is the energy consumption, $W_1^0$ is the energy consumption for the past day, $W_2^0$ is the energy consumption on a similar day in the past year, $T$ is the ambient temperature on the date of actual energy consumption, $Q$ is the type of weekday (working, weekend, or holiday), and $t$ is the hour of the day.
The energy consumption has sustainable systematic variations over time, and the amount of energy consumed on a similar date in the past year provides a basis for forecasting, i.e., the trend component of the forecast. The mean air temperature with a 3-hour gradation on the considered date is taken as the ambient temperature. The type of weekday is also a significant parameter, and the Q parameter is introduced for the classification of weekdays.

The main point of the method (Figure 1) is as follows. The DIU generates data transferred to the UECF; this information is necessary to build the energy consumption forecast one day in advance. The list of data whose effect on forecast precision was established experimentally is transferred, specifically: the external temperature forecast for the projected day, the energy consumption at one-hour intervals on the day preceding the forecasted day, the energy consumption at one-hour intervals on a similar date in the past year, and the type of the forecasted day (whether it is a weekend or a working day). The actual energy consumption and the actual temperature on the previous day are also required for database seeding and retraining of the neural network. The forecasting unit builds the energy consumption forecast using the deep neural network. The DIU and UECF are connected by a data bus for information loading. A controlling action is provided as well: the dispatcher may issue a command for forecast management or retrain the network. The UECF is connected with the DB, which keeps retrospective information on energy consumption over the past few years; information entered by the dispatcher for load forecasting and retraining of the neural network is also stored there. The connections between the UECF and the DB are performed along the data bus. The forecasted energy consumption is the outcome variable of the forecasting unit.
This value is transferred to the UECF for further evaluation of forecast accuracy and to the USDDT. The latter unit analyses the incoming information and issues recommendations for the execution of dispatch control.
The special feature of the proposed engineering solution is the availability of a unit for decision-making support. The unit is intelligent and, depending on the forecasted energy consumption, issues recommendations for the execution of dispatch control according to the algorithm in Figure 2. This unit is open and subject to replenishment and expansion, the input of new recommendations, and conditions of their application. The replenishment is performed by an expert.
The forecast load value may be above the usual level of consumption or, inversely, below the statistical average value. A deviation to either side may adversely affect the operation of the electric power system. Having a forecast of such an event in advance provides an opportunity to perform various dispatching changeovers, ensuring a reliable electric power supply to consumers and reducing the overload, and thus the wear, of equipment and elements of the electric power system. For example, the USDDT will issue the following recommendations: put into operation the standby transformer in a certain transformer plant; re-energise the transformer plant from other distribution stations or by another cable line if the current line's carrying capacity does not match the forecasted load; and give alarms on possible emergency situations, overloading, and the need to place equipment into energy-saving mode. The suggested recommendations may be informative both for the dispatching department and for operation, maintenance, and repair personnel, the chief engineer of the plant, and the Economy Department.
Also, the USDDT is equipped with a training unit, and the management of this unit is performed by an expert. The expert estimates the efficiency of particular recommendations, introduces new recommendations, and specifies the numerical values of forecasted load under which a specific recommendation is applied.
The system functional architecture of the decision-making system is shown in Figure 3.
In summary, the suggested method of decision-making support consists of the following stages: forecasting of energy consumption, performed using an artificial neural network, and use of the resultant value for the selection of a controlling action that distributes power in the electric power system in an optimal manner.

Electrical load forecasting
The UECF is built based on fundamental provisions of artificial intelligence technology. The mean absolute percentage error (MAPE) is used for the appraisal of forecast accuracy:

$$\mathrm{MAPE} = \frac{100\%}{N}\sum_{j=1}^{N}\frac{\left|W_{aj} - W_{fj}\right|}{W_{aj}}, \qquad (2)$$

where $W_{aj}$ is the actual energy consumption, $W_{fj}$ is the energy consumption forecasted by the UECF, and $N$ is the number of examples in the learning sample.
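As an illustration, the MAPE metric can be sketched in a few lines of Python (an illustrative sketch, not the authors' MATLAB code; the sample values below are invented):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    assert len(actual) == len(forecast) and all(a != 0 for a in actual)
    n = len(actual)
    return 100.0 / n * sum(abs(a - f) / a for a, f in zip(actual, forecast))

# Invented hourly loads (kW) and forecasts within a couple of percent
actual   = [300.0, 320.0, 350.0, 310.0]
forecast = [306.0, 316.8, 343.0, 313.1]
print(round(mape(actual, forecast), 3))
```

A MAPE of 1.922%, as reported below, means the forecast deviates from the actual hourly load by under 2% on average.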
The specific nature of energy consumption lies in the difficulty of establishing a functional relationship between the factors affecting energy consumption and the actual load, as well as the degree to which each factor, individually and in combination, impacts it.
The most adequate forecasting method able to take into account the special features of the energy consumption process is the artificial neural network, since it makes it possible to perform deep data analysis, establishes dependencies between values that are not clearly related, and processes large volumes of data quickly. A database of energy consumption over 2 years will contain on the order of 16 thousand lines and 100 thousand values, and since the energy consumption process is continuous, constant data increments and growth of the database will take place. For the correct operation of the neural network, new data must constantly be added to the learning sample.
Scientific research on forecasting for the electricity market of Kazakhstan has hardly been conducted. Also, there is still a certain technology gap between engineering development, software tools of artificial intelligence, and the opportunities for their practical application (K. T. Koshekov et al., 2018). The object of research in this paper is the power supply system of the Northern region of Kazakhstan.
The method of deep machine learning, specifically the use of a deep neural network, underlies the UECF operation. Its advantages over other forecasting methods are the high efficiency of the deep neural network, effective work with large and growing data volumes, and adaptability to different objects and processes.
The UECF is an architecture analysing input information across several hidden layers of a deep network. The operation principle of the UECF in forecasting involves submitting data to the input layer, after which deep learning of the neural network takes place, and the outcome parameter is the predicted variable. The UECF contains one input layer, one output layer, and five hidden layers.
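The paper does not report the hidden-layer widths or the activation function, so the following pure-Python sketch assumes 20 neurones per hidden layer and tanh activations purely to illustrate the data flow through one input layer, five hidden layers, and one output:

```python
import math
import random

random.seed(0)  # reproducible weights for the illustration

def dense(n_in, n_out):
    """Random (n_out x n_in) weight matrix for one fully connected layer."""
    return [[random.gauss(0.0, 0.1) for _ in range(n_in)] for _ in range(n_out)]

def forward(weights, x):
    """Apply one layer with a tanh activation."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in weights]

# Assumed widths: 5 inputs (W1, W2, T, Q, t), five hidden layers, 1 output
widths = [5, 20, 20, 20, 20, 20, 1]
layers = [dense(widths[i], widths[i + 1]) for i in range(len(widths) - 1)]

x = [0.4, 0.5, 0.3, 1.0, 0.5]  # one normalised input vector
for layer in layers:
    x = forward(layer, x)
print(len(x))  # -> 1: a single (normalised) consumption prediction
```

The real UECF is trained in the MATLAB Deep Learning Toolbox rather than evaluated with random weights; the sketch only shows how an input vector is reduced to a single forecast value.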
Constructed in such a manner, the UECF allows processing a large amount of input information in a short time, can build up dependencies from non-descriptive input information, and can discover hidden dependencies between inputs and outputs.
The parameters listed in formula (1) as affecting the energy consumption are the input data of the UECF. The outcome parameter is the projected energy consumption at a specified hour of the day.
The mathematical description of the UECF operation consists in finding weight coefficients that minimise the mismatch error between the network output and the desired output (Manov et al., 2002):

$$E = \frac{1}{2}\sum_{i=1}^{p}\sum_{j=1}^{m}\left(y_{ij} - d_{ij}\right)^2,$$

where $y_{ij}$ and $d_{ij}$ are the actual and desired responses of the $j$-th output-layer neurone to the $i$-th input vector, respectively; $p$ is the number of examples in the learning sample; and $m$ is the number of neurones in the output layer.
The system characterising the suggested method was implemented using the MATLAB Deep Learning Toolbox extension package. MATLAB is a software environment for the solution of engineering problems whose main advantages include the openness of the system, enabling the user to adjust and modify the built-in functions of the environment independently.

Neural network training algorithm
The deep neural network was trained using the Levenberg-Marquardt algorithm, developed independently by Kenneth Levenberg and Donald Marquardt (Levenberg, 1944; Marquardt, 1963). The Levenberg-Marquardt (LM) algorithm can be regarded as a linear combination of the Gauss-Newton (GN) method and the Gradient Descent (GD) method. The alternation between these two methods is called a damping strategy and is controlled by a damping factor. If the damping parameter is large, LM adjusts parameters like the GD method; if the damping parameter is small, LM updates parameters like GN. The GD, GN, and LM methods are optimisation algorithms for the basic least-squares (LS) problem, i.e., they use LS to fit data. Fitting requires a parametric model that relates the response data to the predictor data with coefficients. The LS method minimises the summed square of the residuals, a residual being the difference between an observed value and the fitted value provided by a model (Protić, 2015).
The main expression of Newton's methods is (Osovsky, 2002)

$$p_k = -\left[H(w_k)\right]^{-1} g(w_k), \qquad (3)$$

where $p_k$ is the direction that guarantees reaching the minimum value of the objective function for a given step, $g(w_k)$ is the gradient value at the last decision point $w_k$, and $H(w_k)$ is the Hessian value at the last decision point $w_k$.
In the Levenberg-Marquardt algorithm, the exact Hessian value $H(w)$ in (3) is replaced by the approximated value $G(w)$, which is calculated based on information contained in the gradient, considering some regularisation factor.
In order to describe this method, let us suppose the objective function has the form corresponding to the existence of a single training set:

$$E(w) = \frac{1}{2}\sum_{i=1}^{p} e_i^2(w), \qquad (4)$$

where $e_i(w)$ are the components of the error vector $e(w)$. Using the Jacobian $J(w)$ of this vector, the gradient and the approximated Hessian can be expressed as

$$g(w) = \left[J(w)\right]^{T} e(w), \qquad (5)$$

$$G(w) = \left[J(w)\right]^{T} J(w) + R(w), \qquad (6)$$

where $R(w)$ denotes the components of the Hessian $H(w)$ containing the higher derivatives with respect to $w$.
The essence of the Levenberg-Marquardt approach is approximating $R(w)$ by the regularisation factor $v\mathbf{1}$, where the variable $v$, called the Levenberg-Marquardt parameter, is a scalar quantity that changes during the optimisation process. Thus, the approximated Hessian matrix at the $k$-th step of the algorithm becomes (Osovsky, 2002)

$$G(w_k) = \left[J(w_k)\right]^{T} J(w_k) + v_k \mathbf{1}, \qquad (7)$$

and the corresponding direction of minimisation is

$$p_k = -\left[\left[J(w_k)\right]^{T} J(w_k) + v_k \mathbf{1}\right]^{-1} g(w_k). \qquad (8)$$

At the beginning of the learning process, when the actual value of $w_k$ is still far from the desired solution (the value of the error vector $e$ is high), the parameter $v_k$ takes a value much greater than the eigenvalues of the matrix $\left[J(w_k)\right]^{T} J(w_k)$. In this case, the Hessian is effectively replaced by the regularisation factor, $G(w_k) \approx v_k \mathbf{1}$, and the direction of minimisation is chosen by the method of steepest descent, $p_k \approx -g(w_k)/v_k$. As the error decreases and the desired solution becomes closer, the parameter $v_k$ goes down and the first term in formula (7) begins to play an increasingly important role.
The efficiency of the algorithm depends on a proper selection of the $v_k$ value. A large initial value of $v_k$ must, as optimisation progresses, decrease towards zero as a solution close to the desired one is obtained. There are different ways to select this value; here we describe the original technique proposed by D. Marquardt (Marquardt, 1963). Let the values of the objective function at steps $k$ and $(k-1)$ of the iteration be designated $E_k$ and $E_{k-1}$, and the values of the parameter $v$ at the same steps be $v_k$ and $v_{k-1}$. The reduction coefficient of the value is designated $r$, where $r > 1$. In accordance with the classical Levenberg-Marquardt algorithm, the value of $v$ changes according to the following scheme (Osovsky, 2002): 1) if $E(v_{k-1}/r) \le E_k$, accept $v_k = v_{k-1}/r$; 2) if $E(v_{k-1}/r) > E_k$ and $E(v_{k-1}) \le E_k$, keep $v_k = v_{k-1}$; 3) if $E(v_{k-1}/r) > E_k$ and $E(v_{k-1}) > E_k$, steadily increase the value $m$ times by the factor $r$ until $E(v_{k-1} r^m) \le E_k$ is reached, at the same time assuming $v_k = v_{k-1} r^m$.
Such a procedure of $v$ value modification is performed until the so-called fidelity coefficient $q$, the ratio of the actual decrease of the objective function to the decrease predicted by its quadratic approximation $\tilde{E}$,

$$q = \frac{E(w_k) - E(w_k + p_k)}{E(w_k) - \tilde{E}(w_k + p_k)},$$

reaches a value close to one.
Herewith, the quadratic approximation of the objective function has a high degree of coincidence with the true values, which indicates that the best solution is close. In such a situation, the regularisation factor $v_k \mathbf{1}$ in formula (8) can be omitted ($v_k = 0$); the Hessian determination reduces to the first-order approximation, and the Levenberg-Marquardt algorithm turns into the Gauss-Newton algorithm, characterised by quadratic convergence to the optimal solution (Osovsky, 2002).

The procedure of data input and neural network training
The algorithm shown in Figure 4 presents the procedure of data input and neural network training on its basis. The operation of the algorithm begins with the initial data input: information about the weather forecast for the predicted day, as well as retrospective data on energy consumption for the last day and the same day last year, are entered. In this regard, in order to carry out the forecast, it is necessary to have a retrospective database covering at least one calendar year. When the type of the day of the week is entered, a check is made on whether the day is a weekend or a working day, and the corresponding value is processed further (Kalantaevskaya et al., 2019).
Furthermore, the entered initial data are normalised to the range [0, 1]. The normalisation procedure brings different types of data to one form; this form of recording is necessary for training the neural network.
Data normalisation is performed according to the following formula:

$$Y_{norm} = \frac{Y - Y_{min}}{Y_{max} - Y_{min}},$$

where $Y_{norm}$ is the normalised variable value, $Y$ is the actual, non-normalised variable value, $Y_{min}$ is the minimum variable value in the database, and $Y_{max}$ is the maximum variable value in the database.
The normalisation of data was performed using the MATLAB function ( Figure 5).
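Outside MATLAB, the same min-max normalisation can be sketched in plain Python (illustrative only; the sample loads are invented):

```python
def min_max_normalise(values):
    """Scale a list of raw readings to [0, 1] by the min-max rule."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # degenerate column with no spread
    return [(v - lo) / (hi - lo) for v in values]

# Hourly loads in kW mapped to dimensionless [0, 1] values
loads = [260.0, 305.0, 350.0, 440.0]
print(min_max_normalise(loads))  # -> [0.0, 0.25, 0.5, 1.0]
```

Each column of the learning sample (load, temperature, etc.) is scaled independently, so no single input dominates training by virtue of its units.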
Normalised data enter the neural network input, where training takes place as follows. The initial weighting factors $w_k$ are randomly generated, and the network error $E_k$ is estimated. The next weighting factor $w_{k+1}$ is selected, and the total error $E_{k+1}$ is estimated.
If the current total error increases as a result of updating the weighting factor, the weighting factor is reset to its previous value and the coefficient μ is increased by a factor of 10. If the current total error decreases as a result of the update, the new weighting factor is kept as the current one and the coefficient μ is decreased by a factor of 10. The procedure is repeated until the current total error is less than the required value (Hao & Wilamowski, 2011).
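The μ schedule above can be illustrated on a toy one-parameter least-squares problem (a hedged sketch: the real UECF training operates on the full weight vector via MATLAB's Levenberg-Marquardt routine, not on this scalar case):

```python
def lm_fit_slope(xs, ys, a=0.0, mu=0.01, tol=1e-12, max_iter=50):
    """Fit y ~ a*x, adjusting the damping factor mu as described above."""
    def sse(a):
        return sum((y - a * x) ** 2 for x, y in zip(xs, ys))

    err = sse(a)
    for _ in range(max_iter):
        if err < tol:
            break
        grad = sum(-x * (y - a * x) for x, y in zip(xs, ys))  # J'r
        hess = sum(x * x for x in xs) + mu                    # J'J + mu
        a_trial = a - grad / hess
        err_trial = sse(a_trial)
        if err_trial < err:       # step accepted: keep weights, shrink mu
            a, err, mu = a_trial, err_trial, mu / 10.0
        else:                     # step rejected: restore weights, grow mu
            mu *= 10.0
    return a

print(round(lm_fit_slope([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]), 6))
```

The accept/reject rule mirrors the description: an error increase reverts the weights and multiplies μ by 10, while a decrease keeps them and divides μ by 10.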
At the last stage, the neural network gives a forecast of energy consumption.

Experimental results and discussion
The data on the load consumed by Petropavlovsk, North Kazakhstan region, Republic of Kazakhstan, for the period from 01/05/2016 to 30/04/2018 were collected for UECF learning. The main generating company in the North Kazakhstan region is Petropavlovsk CHP-2 of "SEVKAZENERGO" JSC. The installed electric capacity of this station as of 1 January 2019 is 541 MW. For the 1st quarter of 2018, the total generated electric energy amounted to 872.1 million kWh. The power plant is connected with the power system of Kazakhstan via the overhead high-voltage power lines OPL-220 kV '2711', OPL-220 kV '2721', and OPL-110 kV 'Siberia'. Its total share of the energy produced in the energy system of Kazakhstan is 3% (Report on the functioning of the electric energy and capacity market for the 1st quarter of 2018).
The consumer power is recorded at one-hour intervals, and therefore the learning sample for two calendar years equals 17,746 lines. Aside from the data on consumed power, the data affecting consumption (ambient temperature and type of weekday) were added to the training set. Extracts from the training set are presented in Figure 6.
The database was generated in Microsoft Office Excel, since the MATLAB environment interacts with this program and allows downloading data in .xlsx format.
The parameters shown in Figure 7 are examples of input data supplied to the UECF input. The number of first-layer neurones is determined by the scope of the learning sample. The number of hidden layers was determined experimentally in the process of learning and amounted to 5 layers in the suggested deep neural network. The neural network, having identified the weight characteristics of connections between neurones, detects the consumed load within the considered day. The actual consumption of electric energy is the outcome parameter of the network.
The weather forecast is one of the overriding factors in the forecasting of energy consumption (Chen et al., 2018). The correlation factor between the ambient temperature and the consumed load in the considered learning sample amounts to 0.302022. The correlation between the mentioned variables for one month (July 2018) is shown in Figure 8.
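For reference, a (Pearson) correlation factor of this kind can be computed as follows (the series below are invented toy values, not the actual Petropavlovsk data):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy summer series: load tends to rise with temperature (cooling demand)
temp = [18.0, 22.0, 25.0, 29.0, 31.0]
load = [300.0, 310.0, 330.0, 335.0, 355.0]
print(round(pearson(temp, load), 3))
```

A value of 0.302 over the full two-year sample indicates a moderate, but non-negligible, linear dependence of load on temperature.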
The type of load differs across weekdays. A comparison of the loads on the holiday of 1 May 2018 and the working day of 15 May 2018 showed that the consumed load on the holiday was less than that on the working day by a mean of 16.5% (Figure 9). In such a way, in the learning sample, the weekdays are classified by the following attribute: whether the current day is a working day or belongs to weekends and holidays. All data of the learning sample were normalised to values within the range of 0 to 1 so that all data values are in the same range and have a similar impact on network training.
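The Q attribute can be derived directly from the calendar date; the holiday list below is a small hypothetical excerpt for illustration, not the official Kazakhstan holiday calendar:

```python
from datetime import date

# Hypothetical holiday excerpt; in practice this comes from the official calendar
HOLIDAYS = {date(2018, 5, 1), date(2018, 5, 7), date(2018, 5, 9)}

def day_type(d):
    """Q parameter: 0 for a working day, 1 for a weekend or holiday."""
    if d in HOLIDAYS or d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        return 1
    return 0

print(day_type(date(2018, 5, 1)))   # the holiday compared above
print(day_type(date(2018, 5, 15)))  # the ordinary working Tuesday
```

Encoding the day type as a binary flag keeps the input already in the [0, 1] range used by the rest of the learning sample.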
The neural network training was performed in 3 stages. To that end, the learning sample was divided into 3 segments (training, control, and test), enabling us to train the neural network and verify its working capacity. It was established experimentally that the best result was achieved with the following percentage ratio: training segment 50%, control segment 25%, and test segment 25%.
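The 50/25/25 partition can be sketched as follows (an illustrative helper; the paper does not state whether the split was sequential or shuffled, so a sequential split is assumed here):

```python
def split_sample(rows, train=0.5, control=0.25):
    """Split a learning sample into training / control / test segments."""
    n = len(rows)
    i = int(n * train)
    j = i + int(n * control)
    return rows[:i], rows[i:j], rows[j:]

rows = list(range(100))  # stand-in for 100 hourly records
tr, ctrl, te = split_sample(rows)
print(len(tr), len(ctrl), len(te))  # -> 50 25 25
```

The control segment steers early stopping during training, while the test segment is held out entirely for the final accuracy check.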
The neural network learning process takes place over several epochs; the graph of the error change at each epoch is presented in Figure 10, and the greatest accuracy is achieved at epoch 26.
The chart in Figure 11 shows that the trained network has a correlation coefficient (R) of 0.99 between its outputs and the targets. This coefficient is obtained from the linear regression plot. After the training process, curve fitting is performed to assess how well the network outputs match the training targets (Syahputra & Dhimas Syahfitra, 2018).
As a result of the modelling, an example of the actual and forecast energy consumption for 01/07/2018 is shown in Figure 12.
The analysis of the obtained characteristics allows the following conclusion: when the suggested deep neural network is used for short-term forecasting one day ahead, the mean absolute percentage error (MAPE) of the forecast is 1.922%. The MAPE value was calculated according to formula (2).
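The standard MAPE calculation (which formula (2) presumably follows, since the formula itself is not reproduced here) can be sketched as:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent:
    MAPE = (100 / n) * sum(|actual_i - forecast_i| / |actual_i|)."""
    n = len(actual)
    return 100.0 / n * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast))

# Illustrative hourly loads in kW (not the actual study data):
err = mape([300, 320, 350, 340], [306, 314, 352, 335])
```

A MAPE below roughly 2%, as achieved here, is generally regarded as sufficient accuracy for short-term load forecasting in dispatch applications.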
Additionally, it was found necessary to take the ambient temperature into account, since the climate of Kazakhstan involves dynamic temperature variations within a range of 0 to 10 degrees in one day. Adding the temperature parameter to the learning sample improved the forecasting results by 1%. Adding the Q parameter, which indicates whether a day is a weekend or a working day, reduced the mean absolute error by a further 0.2%.
For practical validation of the decision support method, the developed system was tested at the North Kazakhstan Regional Electric Distribution Company JSC. A transformer substation on the balance sheet of the company, TS 10/0.4 kV No. 304, was selected. This substation has a capacity of 2 × 400 kVA: one transformer is in operation, and the other is a standby. The substation is powered from two mutually redundant sources, DS 110/10 kV No. 7 and No. 11; the permanent supply comes from DS 110/10 kV No. 7. The energy consumption analysis for TS 10/0.4 kV No. 304 demonstrated that a power output within 260 to 370 kW is the optimal energy consumption during the summer period. According to the suggested algorithm, when the forecast energy consumption exceeds 380 kW, the standby transformer needs to be put into operation. When the power exceeds 450 kW, it is necessary to put the standby cable line into operation and supply the substation from DS 110/10 kV No. 11, since at that power the carrying capacity of the cable line from DS 110/10 kV No. 7 cannot keep the voltage drop within the desired value. An increased voltage drop results in an increase in electric losses in the system.
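The decision rule for this substation reduces to comparing the forecast load against the thresholds stated above. A minimal sketch (the function name and recommendation strings are illustrative, not taken from the system):

```python
def dispatch_recommendation(forecast_kw):
    """Recommendation for TS 10/0.4 kV No. 304 based on the forecast load:
    260-370 kW is optimal, above 380 kW the standby transformer is needed,
    above 450 kW supply is switched to the line from DS 110/10 kV No. 11."""
    if forecast_kw > 450:
        return "switch supply to the standby cable line from DS 110/10 kV No. 11"
    if forecast_kw > 380:
        return "put the standby transformer into operation"
    return "normal operation from DS 110/10 kV No. 7"
```

In the full system, the dispatcher receives such a recommendation together with the forecast and makes the final switching decision.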
The operation of the decision support system for TS 10/0,4 kV No. 304 in general form is presented in Figure 13.
The recommendations for dispatch control provided by the decision support tool make it possible to enhance the reliability of the power supply and reduce the voltage loss in the network, which improves the quality of electric energy and, besides that, has an economic effect by reducing the cost of electric power transmission.

Conclusion
Information about the current state of a power system is often insufficient to make effective decisions on power system management, planning, and the implementation of measures to keep power equipment in its most efficient condition. This article proposes a combined system for forecasting electrical energy consumption and supporting dispatch decisions. In this system, the neural-network prediction of energy consumption is used as a tool for dispatch control, thereby presenting a new possibility of using the data obtained from the forecast. The following algorithms were also developed: the algorithm for the functioning of the decision support unit and the algorithm for setting the initial data and training the neural network. The results show that by simulating the load one hour ahead, in many cases the control room operator can make more effective decisions. Information about future energy consumption facilitates decision-making and the effective implementation of planning, monitoring, and management functions.