Electrical load forecasting is the prediction of future demand based on various data and factors, including weekday consumption patterns, electricity prices, and weather conditions, which differ across societies and places. Medium-term electrical load forecasting is generally used for the operation of thermal and hydropower plants and for optimal scheduling of maintenance of power plants and power grids, whereas long-term electrical load forecasting is used to manage future demand on time and to plan generation, transmission, and distribution expansion. In this paper, a hybrid long-term load forecasting approach using the wavelet transform and an outlier robust extreme learning machine is proposed. Hourly load and temperature data were extracted from the GEFCOM 2014 database and split into training and test sets. A one-level wavelet transform is used to decompose the data, extracting features and reducing the dimensions of the data matrix. The decomposed low-frequency components (approximations) and high-frequency components (details) from the wavelet analysis are fed into the model for training and forecasting. To assess the accuracy of the proposed method, the wavelet transform is also applied to the data for three other extreme learning machine variants. In addition, the data without the wavelet transform are entered into four other forecasting models, and the load forecasting results are compared with those of the proposed method. The results of this evaluation show that electrical load forecasting using the wavelet transform and the outlier robust extreme learning machine improves forecasting accuracy, reducing the MAPE to 3.0966. The overall error of the proposed method was the best result obtained among the three other extreme learning machine models and the model without preprocessing: its MAPE is 0.4208 less than the ELM, 0.944 less than the RELM, and 0.1353 less than the WRELM model, respectively.
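The abstract does not specify the wavelet family or the implementation details of the one-level decomposition and the MAPE metric. As a minimal sketch only, the following assumes a Haar wavelet and illustrative (hypothetical, not GEFCOM 2014) load values; the function and variable names are ours, not the authors':

```python
import numpy as np

def haar_dwt_level1(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients.
    Assumes len(x) is even. Each output has half the length of the input,
    illustrating the dimension reduction of the data matrix described
    in the abstract."""
    pairs = np.asarray(x, dtype=float).reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # low-frequency component
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # high-frequency component
    return approx, detail

def mape(actual, forecast):
    """Mean absolute percentage error (in percent), the accuracy
    metric reported in the abstract."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical hourly load values for illustration only
load = np.array([100.0, 104.0, 98.0, 102.0, 110.0, 108.0])
approx, detail = haar_dwt_level1(load)  # each of length 3
```

In the approach described, both the approximation and detail coefficient vectors would then serve as inputs to the extreme learning machine, with MAPE computed on the held-out test set.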