Int. J. Advanced Design and Manufacturing Technology, 2023, Vol. 16, No. 4, pp. 23-35
DOI: 10.30486/admt.2024.1983665.1405 ISSN: 2252-0406 https://admt.isfahan.iau.ir
Evaluation and Comparison of Different Artificial Neural Networks and Genetic Algorithm in Analyzing a 60 MW Combined Heat and Power Cycle
Parisa Ghorbani
Department of Mechanical Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran
Email: parisaghorbani.71@gmail.com
Arash Karimipour *
Department of Mechanical Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran
Email: arashkarimipour@gmail.com
*Corresponding author
Received: 9 April 2023, Revised: 29 October 2023, Accepted: 4 November 2023
Abstract: The constant growth of energy consumption, rising fuel costs, the non-renewable nature of fossil fuel sources, environmental pollution caused by the increased emission of greenhouse gases, and global warming highlight the need for the analysis and optimization of the main energy generation bases, i.e., power plants. The Artificial Neural Network (ANN) is a useful novel method for processing information and for controlling, optimizing, and modeling industrial processes. For the first time in this study, an ANN was designed and applied to data extracted from modeling and analyzing a 60 MW combined heat and power generation plant. To this end, the error backpropagation network was selected as the optimal network, and the generator load or capacity, the condenser pressure, and the Feedwater temperature were considered inputs to the ANN. The energy and exergy efficiencies of the power plant and the overall energy and exergy losses of the cycle were considered outputs of the ANN. The ANN was coded and designed in MATLAB. The Genetic Algorithm (GA) was used to obtain the optimal values of the input parameters, the minimum losses, and the maximum efficiencies based on the first and second laws of thermodynamics.
Keywords: Energy Efficiency, Genetic Algorithm, Irreversibility, Neural Network, Steam Power Plant
Biographical notes: Parisa Ghorbani received her MSc in Mechanical Engineering from the Department of Mechanical Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran in 2021. Her current research interest includes the optimization of energy generation stations and turbo machines. Arash Karimipour is an Associate Professor of Energy Conversion at the Department of Mechanical Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran. He received his PhD in Mechanical Engineering from Sistan & Baluchestan University of Iran. His current research focuses on fluid flow, thermodynamics, heat transfer and energy.
1 Introduction
The vital role of electricity in the industrial and economic infrastructure, and the huge investments made in this sector, make proper utilization and optimization of the power industry necessary. Most thermal power plants are old but play a key role in supplying the required electricity. Identifying the minimum and maximum energy losses in different parts of power plants and estimating the optimal first- and second-law efficiencies of the components and of the plant as a whole greatly help the proper design and improvement of thermal power plants, bringing useful environmental and economic outcomes. Different methods are used for repowering and optimization to achieve better performance in power plants. The ANN and GA are useful methods for this purpose and have been extensively used in the literature.
Kim et al. used an ANN based on the simulation results of a physical model for optimizing and minimizing the costs of a CHP plant. According to their results, the ANN reduced the time required for analyzing the system by more than 7000 times. The optimization results also confirmed the role of exact prediction of the performance of each piece of equipment using a physical model. The computational time decreased in this study while the optimization accuracy improved [1]. In another study, after modeling and obtaining the results for different modes of the Bandar Abbas Steam Power Plant, Nikbakht et al. optimized the power cycle by the GA using the exergy efficiency and the power generation cost as objective functions. The Pareto diagram, displaying cost variations versus the exergy efficiency, plays a key role in selecting the proper investment mode [2]. The GA and a feedforward ANN with multiple hidden layers were considered for the optimal estimation of daily power consumption in a real university building in the UK. Considering the significant relationship between the influential factors and power consumption in the real world, the use of multiple hidden layers improves the prediction accuracy of the ANN. The optimal architecture of such a model is generally determined by a very complex, time-consuming trial-and-error process. To cope with this problem, the GA was used for the automatic design of an optimal architecture with improved generalizability. Data measured over 1.5 years were used for training and testing the proposed model [3]. An intelligent GA-based ANN was employed to deal with the estimation error, long delays, high inertia, and the nonlinear nature of the steam temperature controlled in the power plant. The GA allows optimization, global searching, rapid convergence, and improvement of the network weights. The simulation results confirmed the superiority of the intelligent ANN control system over the conventional control system in terms of control and robustness [4].
Full repowering methods can be used as efficient, previously experienced, and generalizable techniques considering the large number of old power plants and the need to rebuild the vital power generation sector. This method is usually used for repowering power plants at the end of their useful life. In such cases, the initial capital costs decrease significantly compared to the construction of a combined cycle with the same specifications. Taking the unit price of electricity and the exergy efficiency of the Besat Power Plant as objective functions, Hosseinalipour et al. obtained the optimal technical-economic specifications of the repowering cycle of the Besat Power Plant using the GA in single- and two-objective optimization scenarios. Using the full repowering method and GA optimization, a 12-17% increase in the thermal efficiency was obtained [5]. Adding a gas turbine to a steam power plant is also a repowering method used to enhance the specifications of the steam cycle and to recover heat from additional cycles. Repowering methods can be divided into partial and overall repowering techniques. Parallel Feedwater heating is considered a novel partial repowering method. Mehrpanahi et al. applied this method to the Shahid Rajaei Power Plant. The electricity price and the exergy efficiency were considered objective functions in the single- and two-objective optimizations. The use of the GA led to reasonable results for improved performance of the cycle [6].
In this study, an ANN is used for processing data extracted from analyzing and modeling the considered power plant. The GA [7-12] is used to obtain the optimal values for the variable thermodynamic parameters of the cycle, the minimum losses, the maximum first- and second-law efficiencies, and the power generation conditions with the maximum efficiency.
2 Problem Statement
For the first time in this study, an ANN is designed and applied to data extracted from modeling and analyzing the considered power plant. The error backpropagation network was selected as the optimal network, and the generator load or capacity, the condenser pressure, and the Feedwater temperature were considered inputs to the ANN. The energy and exergy efficiencies of the power plant (based on the first and second laws of thermodynamics) and the overall energy and exergy losses of the cycle were considered the ANN outputs. The generator loads and capacities were 15, 30, 45, and 60 MW, and the Feedwater temperatures were 200, 210, 220, and 230 ˚C. The condenser pressure ranged from 0.050 to 0.175 bar. These values were considered inputs to the designed ANN. Based on these inputs, 96 data points were extracted from the thermodynamic modeling of the cycle for each output.
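As a point of reference, the input grid implied by these ranges can be assembled in MATLAB as sketched below. This is an illustrative reconstruction, not the authors' script; in particular, the six evenly spaced condenser pressures are an assumption inferred from the reported range (0.050 to 0.175 bar) and the total of 96 cases.

```matlab
% Sketch of the 96-case input grid (assumes six evenly spaced condenser pressures).
loads = [15 30 45 60];               % generator load [MW]
temps = [200 210 220 230];           % Feedwater temperature [degC]
pcond = linspace(0.050, 0.175, 6);   % condenser pressure [bar] (assumed spacing)

[GL, FT, CP] = ndgrid(loads, temps, pcond);   % 4 x 4 x 6 = 96 combinations
X = [GL(:) FT(:) CP(:)]';                     % 3 x 96 ANN input matrix (one column per case)
```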
3 ANN Architecture and Training
Neural networks can be regarded as information processing systems. During training with a learning algorithm, the network brings its output vector closer to the target vector by adjusting its weights and biases.
This study aims to achieve the operating conditions of the power plant with maximum efficiency and minimum energy and exergy losses. The results are analyzed by an error backpropagation ANN, in which the output error is computed by comparing the network output with the desired or experimental value. The network has a feedforward architecture in which the input data are processed in the forward direction and the output of each layer affects only the next layer. Figure 1 schematically displays the architecture of the designed ANN.
Fig. 1 The architecture of the error backpropagation network [11].
To analyze the results using the error backpropagation network, the generator load or capacity, the condenser pressure, and the Feedwater temperature were considered inputs to the ANN. The energy and exergy efficiencies of the power plant and the overall energy and exergy losses of the cycle were considered the ANN outputs. The ANN was coded and designed in MATLAB.
Given the influence of the network architecture and training method on performance, the architecture was examined by varying the number of neurons in the hidden layer (10, 15, and 20 neurons). Various training methods were considered for the error backpropagation networks, and the best training method and network architecture were selected for each output. The results are given in “Tables 1 to 4”.
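A minimal MATLAB sketch of such an architecture and training-method search is shown below. It is not the authors' original script: the input matrix X and the target vector T1 (the first output) are assumed to come from the thermodynamic model, and the listed training functions are those compared in Tables 1 to 4.

```matlab
% Sketch of the search over training functions and hidden-layer sizes (Tables 1-4).
trainFcns = {'traingd','traingdm','traingda','traingdx','trainrp', ...
             'traincgf','traincgp','traincgb','trainscg', ...
             'trainbfg','trainoss','trainlm'};
hiddenSizes = [10 15 20];

for h = hiddenSizes
    for k = 1:numel(trainFcns)
        net = feedforwardnet(h, trainFcns{k});   % one hidden layer with h neurons
        net.trainParam.showWindow = false;       % suppress the training GUI
        net = train(net, X, T1);
        mseVal = perform(net, T1, net(X));       % mean squared error over all data
        fprintf('%-9s  %2d neurons  MSE = %.3g\n', trainFcns{k}, h, mseVal);
    end
end
```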
After multiple tests and comparison of the performance of the ANN with different architectures and training methods, the error backpropagation network with a single hidden layer of 10 neurons trained by the Levenberg-Marquardt algorithm was considered the best-performing network for analyzing the study results. “Tables 1 to 4” show a significant decrease in the mean squared error (MSE) of all four outputs with the Levenberg-Marquardt algorithm. The learning rules of the Levenberg-Marquardt algorithm are as follows:
$$\mathbf{H} \approx \mathbf{J}^{T}\mathbf{J}, \qquad \mathbf{g} = \mathbf{J}^{T}\mathbf{e} \tag{1}$$
$$\mathbf{w}_{k+1} = \mathbf{w}_{k} - \left[\mathbf{J}^{T}\mathbf{J} + \mu\,\mathbf{I}\right]^{-1}\mathbf{J}^{T}\mathbf{e} \tag{2}$$
where $\mathbf{J}$ is the Jacobian matrix of the network errors with respect to the weights and biases $\mathbf{w}$, $\mathbf{I}$ is the identity matrix, and $\mathbf{e}$ is the error vector, i.e., the difference between the network output and the desired value. The parameter $\mu$ controls the convergence behavior: it is increased when the error grows, which makes the update closer to a gradient-descent step and keeps training stable, and it is decreased when the error falls, which makes the update closer to the Gauss-Newton step and accelerates convergence of the algorithm.
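Under the same assumptions as above, the sketch below shows how the selected network (one hidden layer, 10 neurons, trainlm) could be trained and its MSE evaluated in MATLAB; it is illustrative rather than the authors' code, and T denotes the 1 x 96 target vector of one output.

```matlab
% Sketch of training the selected network: 10 hidden neurons, Levenberg-Marquardt.
net = feedforwardnet(10, 'trainlm');
net.divideParam.trainRatio = 0.70;   % data division into training,
net.divideParam.valRatio   = 0.15;   % validation,
net.divideParam.testRatio  = 0.15;   % and test subsets
[net, tr] = train(net, X, T);
Y      = net(X);                     % network predictions
mseVal = perform(net, T, Y);         % MSE of the kind reported in Tables 1-4
```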
Table 1 Comparison of network architectures and training methods for the first output
Training method | Number of neurons in the hidden layer | Mean squared error (MSE)
MATLAB default | 20 | 0.00094
MATLAB default | 15 | 0.0000092
MATLAB default | 10 | 0.000000514
Gradient descent (traingd) | 10 | 0.45
Gradient descent with momentum (traingdm) | 10 | 0.88
Variable learning rate (traingda) | 10 | 0.22
Variable learning rate (traingdx) | 10 | 0.058
Resilient backpropagation (trainrp) | 10 | 0.008
Conjugate gradient (traincgf) | 10 | 0.046
Conjugate gradient (traincgp) | 10 | 0.028
Conjugate gradient (traincgb) | 10 | 0.017
Scaled conjugate gradient (trainscg) | 10 | 0.013
Quasi-Newton (trainbfg) | 10 | 0.0024
Quasi-Newton (trainoss) | 10 | 0.17
Levenberg-Marquardt (trainlm) | 10 | 0.00000215
Table 2 Comparison of network architectures and training methods for the second output
Training method | Number of neurons in the hidden layer | Mean squared error (MSE)
MATLAB default | 20 | 0.000266
MATLAB default | 15 | 0.00000677
MATLAB default | 10 | 0.000000136
Gradient descent (traingd) | 10 | 0.163
Gradient descent with momentum (traingdm) | 10 | 5.06
Variable learning rate (traingda) | 10 | 1.27
Variable learning rate (traingdx) | 10 | 0.16
Resilient backpropagation (trainrp) | 10 | 0.095
Conjugate gradient (traincgf) | 10 | 0.015
Conjugate gradient (traincgp) | 10 | 0.146
Conjugate gradient (traincgb) | 10 | 0.084
Scaled conjugate gradient (trainscg) | 10 | 0.015
Quasi-Newton (trainbfg) | 10 | 0.003
Quasi-Newton (trainoss) | 10 | 0.2
Levenberg-Marquardt (trainlm) | 10 | 0.00000154
Table 3 Comparison of network architectures and training methods for the third output
Training method | Number of neurons in the hidden layer | Mean squared error (MSE)
MATLAB default | 20 | 1202.84
MATLAB default | 15 | 544.68
MATLAB default | 10 | 252.31
Gradient descent (traingd) | 10 | 584698785.16
Gradient descent with momentum (traingdm) | 10 | 3602804076.61
Variable learning rate (traingda) | 10 | 10315158.62
Variable learning rate (traingdx) | 10 | 3701847.85
Resilient backpropagation (trainrp) | 10 | 280775.64
Conjugate gradient (traincgf) | 10 | 1832222.88
Conjugate gradient (traincgp) | 10 | 389779.9
Conjugate gradient (traincgb) | 10 | 362442.54
Scaled conjugate gradient (trainscg) | 10 | 337732.16
Quasi-Newton (trainbfg) | 10 | 3865188.29
Quasi-Newton (trainoss) | 10 | 4846449.38
Levenberg-Marquardt (trainlm) | 10 | 574.16
Table 4 Comparison of network architectures and training methods for the fourth output
Training method | Number of neurons in the hidden layer | Mean squared error (MSE)
MATLAB default | 20 | 1584.15
MATLAB default | 15 | 489.62
MATLAB default | 10 | 171.93
Gradient descent (traingd) | 10 | 24445553224.9
Gradient descent with momentum (traingdm) | 10 | 5819427709.58
Variable learning rate (traingda) | 10 | 50957217.11
Variable learning rate (traingdx) | 10 | 47142966.22
Resilient backpropagation (trainrp) | 10 | 3551275.95
Conjugate gradient (traincgf) | 10 | 2304328.88
Conjugate gradient (traincgp) | 10 | 49396333.66
Conjugate gradient (traincgb) | 10 | 66943211.48
Scaled conjugate gradient (trainscg) | 10 | 6070721.82
Quasi-Newton (trainbfg) | 10 | 5272623.91
Quasi-Newton (trainoss) | 10 | 40014703.3
Levenberg-Marquardt (trainlm) | 10 | 64.295
4 Genetic Algorithm
The Genetic Algorithm (GA) solves problems by mimicking the evolutionary process observed in nature. Like in nature, the GA creates a population of individuals and obtains an optimal set, or an optimal individual, through genetic operations. According to the GA structure, after recognizing and modeling the problem and forming the initial population, an iterative process is repeated until the stopping conditions are met and the final solution, namely the optimal values of the problem parameters, is reached. Given the nature of the problem, a multi-objective optimization problem is solved in this study by the GA with the specifications in “Table 5”. The GA forms an initial population of N random strings [7-9]:
(3)
(4)
(5)
(6)
(7)
These expressions are part of an idealized model of a simple GA. The probabilities of “losses” and “gains” for the string Z = 000 are calculated below, where PI0 = 1 denotes the crossover probability [10].
(8)
(9)
Table 5 The GA specifications
Specification | Value
Number of variables | 3
Upper bounds (load [MW], condenser pressure [bar], Feedwater temperature [˚C]) | 60, 0.175, 230
Lower bounds (load [MW], condenser pressure [bar], Feedwater temperature [˚C]) | 15, 0.05, 200
Population size | 50
Display level | ‘iter’
Plot function (PlotFcn) | ‘gaplotpareto’
Pareto fraction | 0.6
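The sketch below shows, in hedged form, how a multi-objective GA run could be configured in MATLAB with the settings of Table 5. It is not the authors' script: evaluateCycle is a hypothetical wrapper that feeds a candidate [load, condenser pressure, Feedwater temperature] to the trained ANNs and returns the objectives to be minimized, with the efficiencies negated so that maximizing them becomes a minimization.

```matlab
% Illustrative multi-objective GA setup matching Table 5 (not the authors' script).
lb = [15, 0.050, 200];   % lower bounds: load [MW], condenser pressure [bar], Feedwater T [degC]
ub = [60, 0.175, 230];   % upper bounds

opts = optimoptions('gamultiobj', ...
    'PopulationSize', 50, ...
    'ParetoFraction', 0.6, ...
    'PlotFcn', @gaplotpareto, ...
    'Display', 'iter');

% evaluateCycle (hypothetical) returns
% [-energy efficiency, -exergy efficiency, energy losses, exergy losses]
[xOpt, fOpt] = gamultiobj(@evaluateCycle, 3, [], [], [], [], lb, ub, opts);
```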
5 Results and Discussion
As previously mentioned, this study aimed to design and apply an ANN to data extracted from modeling and analyzing the 60 MW combined heat and power generation plant. The error backpropagation network was selected as the optimal network for this purpose. The generator load or capacity, the condenser pressure, and the Feedwater temperature were considered inputs to the neural network. The energy and exergy efficiencies of the power plant and the overall exergy and energy losses of the cycle were considered the ANN outputs. The results obtained from the selected network are analyzed in the following sections.
5.1. The Performance Diagram of The Optimal Network
To ensure the accuracy of the designed neural network, in addition to the training dataset, part of the data is automatically set aside for validation and part for testing to examine the error variations. When a network is well trained, the error decreases in both the validation and test datasets, and training is then stopped. The performance diagrams plot the error of each dataset against the training epoch. As shown in “Figs. 2 to 5”, the best network performance for the first output, i.e., the energy efficiency, is obtained at epoch 724 with an MSE of 0.00000215.
Fig. 2 The optimal performance of the network for the first output.
Fig. 3 The optimal performance of the network for the second output.
The best network performance for the second (exergy efficiency), third (energy losses), and fourth (exergy losses) outputs is obtained at epochs 501, 92, and 466 with MSEs of 0.00000154, 574.16, and 64.295, respectively.
Fig. 4 The optimal performance of the network for the third output.
Fig. 5 The optimal performance of the network for the fourth output.
5.2. The Regression Diagram for The Optimal Network
A major challenge in linear regression is to reduce the difference between the predicted and observed values of the existing data. A lower difference indicates better agreement between the predicted and observed values, so that the regression coefficient approaches the desired value of unity. Figures 6 to 9 show the regression diagrams of the optimal network for the first to fourth outputs. As shown in “Figs. 6 to 9”, three regression diagrams are plotted to analyze the accuracy of each dataset separately, while the fourth diagram combines the three datasets. For all four outputs, regression values close to 1 for every dataset indicate very good accuracy, suggesting that the designed neural network was adequately trained and predicted the untrained points well. Consequently, the network is reliable and sufficiently accurate for analyzing the study results.
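The regression values of Figs. 6 to 9 can be reproduced for a trained network with the Deep Learning Toolbox functions sketched below; T and net are the assumed target vector and trained network from the earlier sketches.

```matlab
% Sketch of the regression check; R close to 1 indicates good agreement.
Y = net(X);                                % network predictions
plotregression(T, Y, 'All data');          % regression plot, as in Figs. 6-9
[R, slope, intercept] = regression(T, Y);  % correlation coefficient R
```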
Fig. 6 Regression chart at first output of the optimum network.
Fig. 7 Regression chart at the second output of the optimum network
5.3. The Network Results at Trained Points
Figures 10 and 11 display the results of the optimal neural network. As shown, the power plant efficiency increases as the generator load approaches the production capacity of the power plant (60 MW). The maximum energy and exergy efficiencies are obtained at full loading of the cycle, when the generator is at its maximum capacity of 60 MW, and the peak values rise as the Feedwater temperature increases.
Fig. 8 Regression chart at the third output of the optimum network.
Fig. 9 Regression chart at the fourth output of the optimum network.
Fig. 10 Energy efficiency against the load at various Feedwater temperatures.
Fig. 11 Exergy efficiency against load at various Feedwater temperatures.
As expected, the energy and exergy losses of the cycle increase with the production load of the power plant. Figures 12 and 13 show an upward trend in both energy and exergy losses with increasing generator capacity, while the peak values decrease as the Feedwater temperature rises. Fuel consumption by the boiler decreases as the temperature of the water entering the boiler increases, improving the cycle performance and reducing the energy and exergy losses.
Fig. 12 Total energy losses against the load at various Feedwater temperatures.
Fig. 13 Total exergy losses against load at various Feedwater temperatures.
Energy losses also increase despite an increase in the energy and exergy efficiencies at the maximum load of the power plant equipment and the generator. The energy and exergy losses could be decreased up to a certain limit by increasing the Feedwater temperature using specific equipment under certain conditions.
5.4. Application of the Designed Neural Network at New Points
In addition to the optimal performance at trained points, an optimal neural network must also show an acceptable performance at untrained points. To this end, the network was tested at new points and the results are displayed for the four outputs respectively in “Figs. 14 to 17”. These results are consistent with those obtained for the trained points in “Figs. 10 to 13”. According to “Figs. 14 and 15”, the peaks for the first and second outputs occur at the maximum load and Feedwater temperature. The energy and exergy efficiencies increase as the Feedwater temperature and the production load increase.
Fig. 14 Effect of load and Feedwater temperature on the first output.
Fig. 15 Effect of load and Feedwater temperature on the second output.
As shown in “Figs. 16 and 17”, the peaks for the third and fourth outputs occur at the maximum load factor and minimum Feedwater temperature. Thus, more energy and exergy are lost at lower Feedwater temperatures, because more fuel is consumed or more energy and exergy are consumed from the thermodynamic point of view when low-temperature water enters the boiler. Certainly, the consumed energy and exergy are not totally useful and losses increase with increasing fuel consumption. Notably, with increasing the Feedwater temperature, energy losses sharply decline even at the maximum generator capacity. However, the exergy losses slowly decrease with increasing the Feedwater temperature at the full load of the power plant.
Fig. 16 Effect of load and Feedwater temperature on the third output.
Fig. 17 Effect of load and Feedwater temperature on the fourth output.
Figures 18 to 21 display the effect of the condenser pressure and generator capacity, at a constant Feedwater temperature, on the energy and exergy efficiencies and the total energy and exergy losses. As shown in “Figs. 18 and 19”, the maximum energy and exergy efficiencies are achieved by decreasing the condenser pressure and increasing the generator capacity. As the condenser pressure decreases, the vacuum at the end of the turbine increases and the end-stage vapors are drawn into the condenser. This process facilitates the movement of steam along the turbine, reduces the resistance to steam movement, and allows a larger share of the steam energy to be converted into electrical energy. As shown in “Figs. 20 and 21”, the energy and exergy losses decrease with relatively similar slopes as the condenser pressure and the load decrease. In other words, the energy lost at higher condenser pressures can be converted into useful work, and the energy and exergy losses can be minimized by reducing the condenser pressure.
Fig. 18 Effect of load and condenser pressure on the first output.
Fig. 19 Effect of load and condenser pressure on the second output.
Fig. 20 Effect of load and condenser pressure on the third output.
Fig. 21 Effect of load and condenser pressure on the fourth output.
Fig. 22 The error chart for the initial output
Fig. 23 The histogram for the initial output.
5.5. Network Error
As mentioned in the section on network performance, the MSE of the network varies with the network architecture and training method. Moreover, the amount of data, data scattering, and data division affect the network error and the study results. Figures 22 to 29 show the error and histogram diagrams for all four outputs of the network.
Fig. 24 The error chart for the second output.
Fig. 25 The histogram for the second output.
Fig. 26 The error chart for the third output.
Fig. 27 The histogram for the third output.
Fig. 28 The error chart for the fourth output.
Fig. 29 The histogram for the fourth output.
As shown in the error diagrams, the data are densely and evenly distributed, and the designed neural network is accurate. However, the histograms show that the gaps where no data are available introduce errors into the network.
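For completeness, the error and histogram plots of Figs. 22 to 29 can be generated for one output as sketched below, using the same assumed variable names as in the earlier sketches.

```matlab
% Sketch of the network error and its histogram for one output.
e = T - net(X);            % prediction error at every data point
figure; plot(e, 'o');      % error chart (cf. Figs. 22, 24, 26, and 28)
figure; ploterrhist(e);    % error histogram (cf. Figs. 23, 25, 27, and 29)
```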
5.6. Optimization of Results by The Genetic Algorithm
The GA aims to find the optimal values of the problem outputs. In multi-objective optimization, where the objectives compete with one another, the GA yields a set of non-dominated points forming an optimal curve known as the Pareto front. This diagram is a set of optimal solutions from which the best solution can be selected according to the problem characteristics and limitations. Figures 30 and 31 show the optimal solutions for the first and second outputs and for the third and fourth outputs, respectively.
Fig. 30 The Pareto front diagram for the first and second outputs.
Fig. 31 The Pareto front diagram for the third and fourth outputs.
This study aims to achieve the maximum efficiencies and the minimum energy and exergy losses. Accordingly, among the optimal solutions obtained by the GA, the best solution in the output space was found at the 23rd Pareto point, where the energy efficiency, exergy efficiency, energy losses, and exergy losses are 28.4%, 27.2%, 2539.2 kW, and 71963.7 kW, respectively. The corresponding best solution in the input space is 229.99 ˚C, 0.0502 bar, and 31.67 MW for the Feedwater temperature, the condenser pressure, and the optimal generator capacity, respectively. The optimal solutions are listed in “Table 6”.
Table 6 The optimal points in the input and output spaces obtained from the GA
Number | Load [MW] | Energy Efficiency [%] | Exergy Efficiency [%] | Energy Losses [kW] | Exergy Losses [kW]
1 | 59.305 | 29.547 | 28.357 | 5048.292 | 134664.984 |
2 | 30.405 | 28.197 | 27.023 | 2428.674 | 69109.436 |
3 | 59.295 | 29.546 | 28.356 | 5054.429 | 134647.379 |
4 | 16.395 | 25.346 | 24.662 | 1327.448 | 37166.334 |
5 | 24.117 | 26.539 | 26.254 | 1909.838 | 54732.801 |
6 | 35.468 | 28.748 | 27.439 | 2860.485 | 80605.337 |
7 | 17.621 | 25.426 | 24.983 | 1421.203 | 39926.466 |
8 | 55.038 | 29.404 | 28.241 | 4627.773 | 124975.984 |
9 | 27.499 | 27.554 | 26.732 | 2171.399 | 62410.855 |
10 | 59.305 | 29.546 | 28.357 | 5047.985 | 134667.348 |
11 | 15.001 | 25.283 | 24.263 | 1217.345 | 34044.880 |
12 | 42.403 | 29.013 | 27.806 | 3444.052 | 96351.277 |
13 | 50.942 | 29.275 | 28.122 | 4224.772 | 115679.178 |
14 | 45.827 | 29.115 | 27.945 | 3747.631 | 104104.766 |
15 | 47.097 | 29.148 | 27.986 | 3865.241 | 107009.842 |
16 | 15.328 | 25.262 | 24.327 | 1248.711 | 34824.533 |
17 | 26.042 | 27.107 | 26.526 | 2058.240 | 59149.330 |
18 | 44.266 | 29.071 | 27.887 | 3605.605 | 100558.348 |
19 | 56.132 | 29.441 | 28.272 | 4732.199 | 127454.901 |
20 | 41.113 | 28.975 | 27.750 | 3331.074 | 93422.115 |
21 | 22.088 | 26.037 | 25.927 | 1756.52 | 50074.815 |
22 | 43.992 | 29.065 | 27.878 | 3581.557 | 99930.933 |
23 | 31.669 | 28.401 | 27.147 | 2539.171 | 71963.704 |
24 | 38.394 | 28.884 | 27.614 | 3101.342 | 87255.786 |
25 | 19.544 | 25.595 | 25.412 | 1569.709 | 44315.255 |
26 | 51.583 | 29.295 | 28.141 | 4286.518 | 117135.385 |
27 | 49.379 | 29.227 | 28.073 | 4078.969 | 112133.979 |
28 | 53.608 | 29.358 | 28.199 | 4492.298 | 121733.563 |
29 | 39.590 | 28.910 | 27.670 | 3202.974 | 90016.555 |
30 | 52.287 | 29.312 | 28.157 | 4355.397 | 118752.508 |
6 Conclusions
An ANN was used to better process the simulation results of a 60 MW combined heat and power generation plant. The optimal network with proper performance was selected, and through multiple tests the best architecture and training method were determined to ensure the accurate performance of the designed neural network. The results are summarized below:
1. Despite an increase in the energy and exergy efficiencies of the power plant at the maximum load of equipment and generator, the energy and exergy losses also increased. By heating the Feedwater and raising its temperature, the energy and exergy losses can be reduced up to a certain value.
2. With increasing Feedwater temperature, the energy losses sharply declined even at the maximum generator capacity, whereas the exergy losses decreased slowly with increasing Feedwater temperature at the full load of the power plant. Huge amounts of energy and exergy are lost in the boiler when low-temperature water enters it. By raising the Feedwater temperature, the energy losses could be compensated even at full load, while the exergy losses decreased significantly by increasing the Feedwater temperature and decreasing the produced load.
3. The maximum energy and exergy efficiencies were obtained by decreasing the condenser pressure and increasing the generator capacity. On the other hand, the energy and exergy losses decreased with relatively similar slopes as the condenser pressure and the load decreased. In other words, the energy lost at higher condenser pressures can be converted into useful work, and the energy and exergy losses can be minimized by reducing the condenser pressure.
4. From the optimal solutions obtained from the GA, the best solutions in the output space were 28.4%, 27.2%, 2539.2 kW, and 71963.7 kW, respectively for the first-law efficiency (energy), second-law efficiency (exergy), energy losses, and exergy losses. The best solutions in the input space were 229.99˚C, 0.0502 bar, and 31.67 MW, respectively for the Feedwater temperature, the condenser pressure, and optimal generator capacity.
References
[1] Kim, M. J., Kim, T. S., Flores R. J., and Brouwer, J., Neural-Network-Based Optimization for Economic Dispatch of Combined Heat and Power Systems, Applied Energy, Vol. 256, 2020.
[2] Nikbakht Naserabad, S., Mehrpanahi, A., and Ahmadi, G., Multi-Objective Optimization of HRSG Configurations on The Steam Power Plant Repowering Specifications, Energy, Vol. 159, 2018, pp. 277-293.
[3] Luo, X. J., Juan Manuel Davila Delgado, A. O., Owolabi, H. A., and Ahmed, A., Genetic Algorithm-Determined Deep Feedforward Neural Network Architecture for Predicting Electricity Consumption in Real Buildings, Energy and AI, Vol. 2, 2020.
[4] Li, H., Zhen-yu, Zh., The Application of The Immune Genetic Algorithm in Main Steam Temperature of PID Control of BP Network, Physics Procedia, Vol. 24A, 2012, pp. 80-86.
[5] Hosseinalipour, S. M., Mehrpanahi, A., and Mobini, K., Full Repowering to Enhance the Technical-Economic Specifications of a Steam Power Plant, Mechanical Engineering Journal, Tarbiat Modarres University, Vol. 11, No. 1, 2011, pp. 1-18.
[6] Mehrpanahi, A., Hosseinalipour, S. M., and Seijanivandi, S., Multi-Objective Optimization of Parallel Feedwater Heating Repowering of a Steam Power Plant by The Genetic Algorithm, Amir Kabir Journal (Mechanical Engineering), Vol. 45, No. 1, 2013, pp. 93-108.
[7] Holland, J. H., Adaptation in Natural and Artificial Systems, Ann Arbor, MI: University of Michigan Press, 1975.
[8] Goldberg, D. E., Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley, 1989.
[9] Montana, D. J., Davis, L., Training Feedforward Neural Networks Using Genetic Algorithms, In Proceedings of the 11th International Joint Conference on Artificial Intelligence, Morgan Kaufmann, San Mateo, CA, Vol. 1, 1989, pp. 762–767.
[10] Whitley, D. A., Genetic Algorithm Tutorial, Stat Comput., Vol. 4, No. 2, 1994, pp. 65-85.
[11] Nguyen Q., et al., Performance of Joined Artificial Neural Network and Genetic Algorithm to Study the Effect of Temperature and Mass Fraction of Nanoparticles Dispersed in Ethanol. Mathematical Methods in the Applied Sciences, 2020, DOI: 10.1002/mma.6688.
[12] Ghorbani, P., Smida, K., Razzaghi, M. M., Yazd, M. J., Sajadi, S. M., Bagherzadeh, S. A. and Inc, M., Modeling and Thermoeconomic Analysis of a 60 MW Combined Heat and Power Cycle Via Feedwater Heating Compared to A Solar Power Tower, Sustainable Energy Technologies and Assessments, Vol. 54, 2022, pp. 102861.
COPYRIGHTS
© 2023 by the authors. Licensee Islamic Azad University Isfahan Branch. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution 4.0 International (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)