• List of Articles: Support Vector (بردار پشتیبان)

      • Open Access Article

        1 - Credit Facilities Applicants Classification by SVM
        A. Toloei Ashlaghi, H. Nikoomaram, F. Maghdoori Sharabian
        In the banking industry, one issue that credit policy makers must always consider is risk management. Among the various risks banks deal with, credit risk is the most important; it arises from losses caused by the inability or unwillingness of borrowers to pay their credit obligations. To manage and control this risk, classification systems are an undeniable requirement. Such systems determine the class of a customer from the existing documents and information. The use of these systems helps the bank select customers appropriately and, by controlling and reducing credit risk, improves the efficiency of providing bank facilities. In this research, an artificial-intelligence-based classification model built on the support vector machine is used to predict the financial performance of the bank's legal (corporate) customers. Specifically, SVM is combined with mechanisms such as the F-score and grid search to increase the accuracy of the model and classify the legal customers. The results confirm the improvement in classification accuracy and demonstrate that SVM can provide better accuracy than other models.
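
The following is a minimal, illustrative sketch (not the paper's code) of the pipeline this abstract describes: F-score feature ranking followed by a grid-searched RBF SVM. It assumes scikit-learn and uses a synthetic feature matrix in place of the bank's customer data.

```python
# Illustrative sketch only: F-score feature ranking + grid-searched RBF SVM,
# in the spirit of the pipeline described in the abstract (not the authors' code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data; in the paper these would be the legal (corporate) customers' features.
X, y = make_classification(n_samples=400, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("fscore", SelectKBest(f_classif, k=10)),   # keep the top-ranked features by F-score
    ("svm", SVC(kernel="rbf")),
])

# Grid search over (C, gamma), as the abstract describes.
grid = GridSearchCV(pipe, {"svm__C": [1, 10, 100], "svm__gamma": [0.01, 0.1, 1.0]}, cv=5)
grid.fit(X_train, y_train)
print("best params:", grid.best_params_)
print("test accuracy:", grid.score(X_test, y_test))
```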
      • Open Access Article

        2 - Using a neural network approach to predict company profitability and comparison with the C5 decision tree and support vector machine (SVM)
        Malihe Habibzade, Mostafa Ezadpour
        Profit, as one of the most important indicators for measuring the performance of an economic unit, is an important accounting issue; it has a high status because of the competitive environment and the importance of quick and proper decision making by managers. Analyzing this index, the factors affecting it, and predicting profitability is therefore important. In this regard, the present study was conducted on a sample of 124 observations for the period 1387 to 1395 (Iranian calendar), based on basic information from the companies' financial statements; the effect of 34 variables on the accuracy of predicting the profitability of companies listed on the Tehran Stock Exchange was investigated. The C5 decision tree method was used to determine the significant variables in predicting profitability because the model is easy to understand. After determining the effective variables and identifying 8 of them, the accuracy of the predictions was measured using the neural network technique, the C5 decision tree and the support vector machine (SVM), and the results of these three algorithms were compared. The comparison shows that the C5 decision tree with the 8 variables gives the best prediction, with an accuracy of 93.54%; the neural network model follows with 81.45%, which is more accurate than the support vector machine (69.35%).
      • Open Access Article

        3 - Forecasting the Type of Audit Opinions: A Data Mining Approach
        محمدحسین ستایش فهیمه ابراهیمی سیدمجتبی سیف مهدی ساریخانی
        Data mining methods can be used to assist auditors in providing audit opinions. The purpose of this research is to forecast the type of audit opinion using data mining methods and to compare the performance of these methods. Artificial neural networks, support vector machines, nearest neighbors and decision trees were used to conduct the research. The sample consists of 842 observations between 2001 and 2010. The observations were divided into two groups: one for training and the other for assessing the methods. A comparison of performance indicates that the support vector machine approach outperforms the other approaches with a predictive ability of 76%. Measuring the type I and type II error rates of each method also shows that support vector machines perform better than the other methods.
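
A small, hypothetical sketch of how the type I and type II error rates used to compare the methods above can be computed from a binary confusion matrix; the labels and predictions are made up, and scikit-learn is assumed.

```python
# Illustrative sketch only: type I / type II error rates from a binary confusion
# matrix, as used to compare classifiers in the abstract.
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels: 1 = unqualified (clean) opinion, 0 = modified opinion.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
type_i_error = fp / (fp + tn)    # false positive rate
type_ii_error = fn / (fn + tp)   # false negative rate
print(f"type I error:  {type_i_error:.2f}")
print(f"type II error: {type_ii_error:.2f}")
```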
      • Open Access Article

        4 - Performance Evaluation of M5 Tree Model and Support Vector Regression Methods in Suspended Sediment Load Modeling
        Mohammad Taghi Sattari علی رضازاده جودی Forugh Safdari فراز قهرمانیان
        Sediment transport has always affected rivers and civil structures, and the lack of knowledge about its exact amount causes heavy damage. It is therefore very important to properly estimate the sediment load in rivers for sediment, erosion and flood control. This study used two new data mining methods, the M5 model tree and support vector regression, in comparison with the classic sediment rating curve method to estimate the suspended sediment load in the Aharchay River. To assess the performance of the methods, three criteria were used: the correlation coefficient, the root mean square error and the mean absolute error. A sensitivity analysis of the models to the input variables showed that the flow discharge in the current month had the greatest effect on the amount of suspended sediment load. The results showed the high accuracy of the new data mining methods in comparison with the sediment rating curve. Although both data mining methods were more accurate and had less error than the conventional sediment rating curve, the M5 model tree is recommended for similar cases because of the simple, understandable linear relationships it provides.
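
For reference, a minimal sketch of the three evaluation criteria named above (correlation coefficient, RMSE, MAE), computed with NumPy on made-up observed and simulated sediment loads:

```python
# Illustrative sketch only: the three evaluation criteria named in the abstract
# (correlation coefficient, RMSE, MAE) for comparing sediment-load models.
import numpy as np

def correlation_coefficient(obs, sim):
    return np.corrcoef(obs, sim)[0, 1]

def rmse(obs, sim):
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def mae(obs, sim):
    return float(np.mean(np.abs(obs - sim)))

# Hypothetical observed vs. simulated suspended sediment loads (tons/day).
obs = np.array([120.0, 80.0, 310.0, 45.0, 200.0])
sim = np.array([110.0, 95.0, 280.0, 60.0, 215.0])
print(correlation_coefficient(obs, sim), rmse(obs, sim), mae(obs, sim))
```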
      • Open Access Article

        5 - Autoregressive simulation of Zarrinehrud river basin runoff using Procrustes analysis method and artificial neural network and support vector machine models
        بهروز سبحانی Mohammad Isazadeh منیر شیرزاد
        River flow prediction in river basins has an important role in the operation and correct management of water resources, and determining the type and number of inputs of the estimator models is one of the important steps in flow prediction. Therefore, the Procrustes analysis (PA) method was used to determine the number of effective inputs. In this study, flow prediction was done using flow data from the Safakhaneh and Santeh hydrometric stations, and the artificial neural network (ANN) and support vector machine (SVM) models were used for prediction. The best flow estimates at the Safakhaneh hydrometric station were obtained with the MLP and SVM models, with RMSE equal to 5.68 and 4.85 m³/s and CC equal to 0.73 and 0.78, respectively; at the Santeh hydrometric station, RMSE was 6.44 and 6.36 m³/s and CC was 0.78 and 0.79 for the MLP and SVM models, respectively. The PA-SVM model showed better results than the SVM model in estimating the Safakhaneh station flow, with RMSE equal to 5.45 m³/s and CC equal to 0.73 during the test period. The results also indicated that the SVM and PA-SVM models estimated the flow of the Santeh station with RMSE equal to 6.85 and 7.03 m³/s, respectively. Overall, the results indicated that the Procrustes analysis method can be used as an efficient and suitable method for determining the number of effective inputs. Comparison of the ANN and SVM results indicated that the ANN model is more accurate than the SVM model.
      • Open Access Article

        6 - Evaluation of a wavelet – least squares support vector machine hybrid model for rainfall time series spatiotemporal disaggregation
        Nima Farboudfam, Vahid Nourani, Babak Aminnejad
        Given the need to simulate rainfall time series at different scales for engineering purposes on the one hand, and the lack of records of such parameters at small scales because of administrative and economic problems on the other, disaggregating rainfall time series to the desired scale is an essential topic. In this study, a wavelet–least squares support vector machine (WLSSVM) hybrid model is proposed for disaggregating the Tabriz and Sahand rain gauge time series, in view of the nonlinear characteristics of the time scales. Daily data of four rain gauges and monthly data of six rain gauges from the Urmia Lake basin over ten years were decomposed with the wavelet transform; the subseries were then ranked using mutual information and correlation coefficient criteria, and the superior subseries were used as input data of a least squares support vector machine (LSSVM) model to disaggregate the Tabriz and Sahand monthly rainfall time series to daily time series. Results of the WLSSVM disaggregation model were compared with those of LSSVM and a traditional multiple linear regression model. At the validation stage, in the optimized case, the WLSSVM results improved on LSSVM and multiple linear regression by 10% and 37.5% for the Tabriz rain gauge and by 24.5% and 46.7% for the Sahand rain gauge, respectively. It was concluded that the hybrid WLSSVM model is more accurate than the two other methods and can be considered an accurate model for disaggregating rainfall time series.
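
A rough, illustrative sketch of the wavelet-then-SVM idea described above, assuming PyWavelets and scikit-learn; a standard SVR stands in for the least squares SVM, and the rainfall series is synthetic rather than the Tabriz or Sahand records.

```python
# Illustrative sketch only (not the authors' model): wavelet decomposition of a
# rainfall series into sub-series, ranking the sub-series by correlation with the
# target, and fitting an SVR on the best ones.
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(0)
monthly_rain = rng.gamma(shape=2.0, scale=15.0, size=240)   # hypothetical monthly rainfall

# Multi-level discrete wavelet decomposition into approximation/detail coefficients.
coeffs = pywt.wavedec(monthly_rain, "db4", level=3)

def component(all_coeffs, keep):
    """Reconstruct the signal keeping only one coefficient array, zeroing the rest."""
    parts = [c if i == keep else np.zeros_like(c) for i, c in enumerate(all_coeffs)]
    return pywt.waverec(parts, "db4")[: len(monthly_rain)]

subseries = np.column_stack([component(coeffs, i) for i in range(len(coeffs))])

# Rank sub-series by absolute correlation with the target (here: next month's rainfall).
target = monthly_rain[1:]
X = subseries[:-1]
corr = [abs(np.corrcoef(X[:, j], target)[0, 1]) for j in range(X.shape[1])]
best = np.argsort(corr)[::-1][:3]                     # keep the top-ranked sub-series

model = SVR(kernel="rbf", C=10.0).fit(X[:, best], target)
print("training R^2:", model.score(X[:, best], target))
```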
      • Open Access Article

        7 - Application of hybrid ARIMA and support vector regression model for improvement of time series forecasting
        Laleh Parviz, Bahareh Saeedabdi
        Accurate investigation of the structure of a time series plays an important role in increasing the accuracy of ARIMA forecasting. The aim of this research is to investigate the effect of decomposing a time series into linear and nonlinear parts on the ARIMA model results. In the decomposition of the wheat and maize yield time series (in Kermanshah and Esfahan provinces), the linear part was modeled with ARIMA and the nonlinear part with support vector regression (the hybrid model). The configuration of the nonlinear part of the hybrid model matters; for example, in the maize time series of Kermanshah, the RMSE was 1.52 for the residual-based configuration and 15.03 for the time-series configuration. With the hybrid model, RMSE, MAE and UII for the wheat time series of Esfahan decreased by 45.94%, 52.29% and 46%, respectively, which indicates the improvement due to the hybrid model. The GMER value in all four time series was greater than one, which indicates overestimation by the hybrid model. Comparing the average of each criterion across the two models and crops in each province indicated the effect of climate on the modeling process, because the average of the criteria in Esfahan province decreased relative to Kermanshah (RMSE decrease = 24.72%, UII decrease = 12.24%). Therefore, decomposing a time series into linear and nonlinear parts can increase the accuracy of ARIMA model results.
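
Below is a minimal sketch of the residual-based hybrid described above: ARIMA captures the linear part and an SVR is fitted to the lagged residuals (the nonlinear part). It assumes statsmodels and scikit-learn and uses a synthetic series instead of the crop-yield data.

```python
# Illustrative sketch only: residual-based ARIMA + SVR hybrid — ARIMA models the
# linear part, SVR models the nonlinear residuals (not the authors' implementation).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(80)
y = 10 + 0.1 * t + np.sin(t / 3.0) + rng.normal(0, 0.2, size=t.size)   # hypothetical yield series
train, test = y[:70], y[70:]

# Linear part: ARIMA in-sample one-step predictions and their residuals.
arima = ARIMA(train, order=(1, 1, 1)).fit()
linear_fit = arima.predict(start=1, end=len(train) - 1)
residuals = train[1:] - linear_fit

# Nonlinear part: SVR on lagged residuals.
lag = 3
X = np.column_stack([residuals[i:len(residuals) - lag + i] for i in range(lag)])
z = residuals[lag:]
svr = SVR(kernel="rbf", C=10.0).fit(X, z)

# Hybrid forecast = ARIMA forecast + SVR-predicted residual (one step ahead shown).
arima_fc = arima.forecast(steps=1)[0]
resid_fc = svr.predict(residuals[-lag:].reshape(1, -1))[0]
print("hybrid one-step forecast:", arima_fc + resid_fc, "actual:", test[0])
```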
      • Open Access Article

        8 - Comparison of Data Mining Models Performance in Rainfall Prediction Using Classification Approach (Case Study: Hamedan Airport Synoptic Weather Station)
        Morteza Salehi Sarbijan, Hamidreza Dezfoulian
        Background and Aim: Rainfall is one of the complex natural phenomena and one of the most crucial components of the water cycle, playing a significant role in assessing the climatic characteristics of each region. Understanding the amount and trends of rainfall changes is essential for effective management and more precise planning in the agricultural, economic, and social sectors, as well as for studies related to runoff, droughts, groundwater status, and floods. Additionally, rainfall prediction in urban areas has a significant impact on traffic control, sewage flow, and construction activities. Method: The objective of this study is to compare the accuracy of classification models, including the Chi-squared Automatic Interaction Detector (CHAID), C5 decision tree, Naive Bayes (NB), QUEST tree, Random Forest, k-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Artificial Neural Network (ANN), in predicting rainfall occurrence using 50 years of data from the synoptic station at Hamedan Airport. In this study, 80% of the data is used for training the models and 20% for model validation, and the results obtained from the model executions are compared using metrics such as the confusion matrix, the Receiver Operating Characteristic (ROC) curve, and the Area Under the Curve (AUC) index. To create the classification variable, the days of the year are categorized into two classes based on the rainfall data: days with rainfall (y) and days without rainfall (n). Data preprocessing is performed using Automatic Data Preprocessing (ADP), and Principal Component Analysis (PCA) is then employed to reduce the dimensions of the variables. Results: In this study, the PCA method reduces the dimensions of the variables to 5. Approximately 80% of the available data corresponds to rainless days, while 20% corresponds to rainy days. The results indicated that the KNN model, with an accuracy of 91.9% for the training data, and the SVM model, with 89.13% for the test data, exhibit the best performance among the data mining models. The AUC index for the KNN model is 0.967 for the training data and 0.935 for the test data. According to the ROC curve for the Hamedan rainfall data, the KNN model outperforms the other models. Considering the sensitivity index in the confusion matrix, the KNN and SVM models perform better in predicting non-rainfall occurrence for the training data, while in terms of predicting precipitation occurrence, the RT and KNN models show better results according to the specificity index. Conclusion: For the training data, the accuracy obtained for the RT, C5, ANN, SVM, BN, KNN, CHAID and QUEST models was 86.82%, 89.78%, 89.55%, 89.96%, 88.06%, 91.9%, 88.29%, 87.46%, 91.9%, respectively; for the test data, it was 83.82%, 87.9%, 88.12%, 89.13%, 87.12%, 89.13%, 87.12%, 88.19%, 86.93%, 86.76%, respectively. The AUC index in the training data for the RT, C5, ANN, SVM, BN, KNN, CHAID and QUEST models was 0.94, 0.99, 0.94, 0.94, 0.93, 0.97, 0.93 and 0.89, respectively; for the test data, it was 0.89, 0.89, 0.93, 0.94, 0.92, 0.90, 0.92 and 0.88, respectively. Considering the accuracy metric and the AUC index, the KNN model for the training data and the SVM model for the test data were the most suitable for rainfall prediction.
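
A compact, hypothetical sketch of the workflow described above: PCA dimensionality reduction followed by KNN and SVM classifiers evaluated with a confusion matrix and ROC AUC, using scikit-learn on synthetic, imbalanced data in place of the Hamedan station records.

```python
# Illustrative sketch only: PCA -> KNN / SVM classification with confusion matrix
# and ROC AUC evaluation, mirroring the workflow the abstract describes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder meteorological features; y = 1 for a rainy day, 0 for a dry day.
X, y = make_classification(n_samples=600, n_features=12, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(5)), ("SVM", SVC(probability=True))]:
    model = make_pipeline(StandardScaler(), PCA(n_components=5), clf)  # reduce to 5 PCs
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    print(name, "AUC:", round(roc_auc_score(y_te, prob), 3))
    print(confusion_matrix(y_te, model.predict(X_te)))
```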
      • Open Access Article

        9 - Drought Forecasting Using Wavelet - Support Vector Machine and Standardized Precipitation Index (Case Study: Urmia Lake-Iran)
        Mehdi Komasi, Soroush Sharghi
        Background and Objectives: Drought is regarded as a serious threat to people and the environment. As a result, finding indices to forecast drought is an important issue that needs to be addressed urgently. An appropriate and flexible index for drought classification is the Standardized Precipitation Index (SPI). Artificial intelligence models are commonly used to forecast SPI time series, but because they rely on the autoregressive property, they are not able to capture the seasonal and long-term patterns in the time series. In this study, the wavelet–support vector machine (WSVM) approach was used for drought forecasting with the SPI. Method: The SPI time series of the Urmia Lake watershed was decomposed into multiple frequency-based time series by the wavelet transform; these time series were then given as input data to the support vector machine (SVM) model to forecast the drought. Findings: The results showed that, in the verification step, the maximum R2 and minimum RMSE values were 0.865 and 0.237 for the SVM model and 0.954 and 0.056 for the WSVM model, respectively. Discussion and Conclusion: The proposed hybrid model thus has superior ability in forecasting SPI time series compared with the single SVM model, and by considering seasonality effects it can accurately assess the extreme values in the SPI time series. Finally, it was concluded that the proposed hybrid model is relatively more appropriate than classical autoregressive models such as ANN.
      • Open Access Article

        10 - Forecasting Municipal Solid Waste Quantity by Intelligent Models and Their Uncertainty Analysis
        Maryam Abbasi, Malihe Fallah Nezhad, Rooholah Noori, Maryam Mirabi
        Background and Objective: The first step in the design of municipal waste management systems is a complete understanding of the waste generation quantity. Forecasting waste generation is one of the most complex engineering problems because of the effect of various uncontrollable parameters on waste generation. It is therefore necessary to develop approaches for modeling such complex events. The objective of this study is to forecast the waste generation quantity using intelligent models, to compare them, and to analyze their uncertainty. Method: In this study, Mashhad city was selected as a case study, and the waste generation time series from 1380 to 1390 (Iranian calendar) was used for weekly prediction. Intelligent models including an artificial neural network, support vector machine, adaptive neuro-fuzzy inference system and K-nearest neighbors were used for modeling. After optimizing the models' parameters, the models' accuracy was compared using statistical indices. Finally, the uncertainty of the models' results was analyzed with the Monte Carlo technique. Findings: The results showed that the coefficients of determination (R2) of the artificial neural network, adaptive neuro-fuzzy inference system, support vector machine, and K-nearest neighbor models were 0.67, 0.69, 0.72 and 0.64, respectively. The uncertainty analysis also confirmed these results and showed that the support vector machine model had the lowest uncertainty among the models and the lowest sensitivity to the input variables. Conclusion: The intelligent models were successfully able to forecast the waste quantity; among the studied models, the support vector machine was the best predictive model and, moreover, produced the results with the lowest uncertainty.
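
A minimal sketch of a Monte Carlo uncertainty analysis in the spirit described above: a fitted model's inputs are perturbed repeatedly and the spread of its predictions is used as an uncertainty band. The model, noise level and data below are placeholders, not the Mashhad series.

```python
# Illustrative sketch only: Monte Carlo uncertainty analysis of a fitted model by
# repeatedly perturbing the inputs and summarizing the prediction spread.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                                         # placeholder weekly predictors
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(0, 0.1, 200)     # placeholder waste quantity
model = SVR(kernel="rbf", C=10.0).fit(X, y)

x_new = X[:1]                                      # one new observation
sims = []
for _ in range(1000):                              # Monte Carlo iterations
    x_pert = x_new + rng.normal(0, 0.1, size=x_new.shape)   # assumed input noise level
    sims.append(model.predict(x_pert)[0])
lo, hi = np.percentile(sims, [2.5, 97.5])
print(f"prediction: {model.predict(x_new)[0]:.2f}, 95% band: [{lo:.2f}, {hi:.2f}]")
```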
      • Open Access Article

        11 - Estimation of Aquifer Qualitative Parameters in the Guilan Plain Using the Gamma Test and Support Vector Machine and Artificial Neural Network Models
        Mohammad Isazadeh, Seyed Mostafa Biazar, Afshin Ashrafzadeh, Rezvan Khanjani
        Background and Objective: Information about the distribution of the qualitative and quantitative parameters of groundwater supplies is one of the most important requirements in integrated groundwater management. Thus, this study attempts to determine a proper model and input combination for estimating quality parameters, namely electrical conductivity (EC) and the calcium (Ca) and sodium (Na) ions, in the aquifers of the Guilan Plain. Method: In this study, data from 132 observation wells during 2001 to 2013 were used, and artificial neural network (ANN) and support vector machine (SVM) models were applied. In the first approach, estimations were conducted according to five different input combinations, including water level, distance from the sea, total precipitation of six months and the coordinates of the observation wells. In the second approach, estimations were based on combining the qualitative parameters selected by the gamma test with the best input combination from the first part. Findings: Comparison of the results from the first part indicated that the SVM model outperformed the ANN model in estimating the Ca, Na and EC parameters. The support vector machine error values for estimating the Ca, Na and EC variables in the test period were 1.218 (meq/l), 0.867 (meq/l), and 175.742 (µmhos/cm), while for the artificial neural network these values were 1.268 (meq/l), 0.933 (meq/l), and 186.448 (µmhos/cm), respectively. The results of this part showed that adding the distance-from-sea input improves the estimations of the models in all cases. In the second part, using the gamma test on the nine measured quality parameters, the best combination of quality parameters was determined for estimating the three parameters Ca, Na and EC. The results of the second part show that both the ANN and SVM models have an excellent performance in estimating the three qualitative parameters. The ANN model error values in estimating the Ca, Na and EC variables in the validation period were 0.662 (meq/l), 0.305 (meq/l), and 47.346 (µmhos/cm), while these values were 0.671 (meq/l), 0.356 (meq/l), and 55.412 (µmhos/cm) for the SVM model, respectively. Clearly, the results of the ANN model in this section were better than those of the SVM model. Discussion and Conclusion: The results showed that both the ANN and SVM models have a great ability to predict qualitative parameters in the aquifers. Also, with fewer inputs the results of the SVM model are better than those of the ANN model, and with more inputs it is vice versa. The results of the second section showed that the gamma test is fully practical and accurate in determining the effective input combinations.
      • Open Access Article

        12 - Presenting and explaining a model to create company value according to the role of accounting standards management, financial reporting quality and audit quality using metaheuristic models
        Saman Khorshid, Yahya Kamyabi, Mehdi Khalilpour
        In the world of investment, decision making is the most important part of the investment process, in which investors need to make the most optimal decisions in order to achieve their maximum benefit and wealth. In this regard, the most important factor in the decision-making process is information, which can have a significant impact on decisions because it leads different people to different decisions. In the stock market, investment decisions are likewise affected by information. Therefore, this study seeks to present and explain a model for creating company value according to the role of accounting standards management, financial reporting quality and audit quality using metaheuristic models. To achieve this goal, the data of 101 companies listed on the Tehran Stock Exchange during the period 1392 to 1397 (Iranian calendar) were collected, and optimization algorithms were used to analyze the data. The research findings indicate that all three metaheuristic methods have the power to estimate economic value added and market value added; however, the estimated economic value added and market value added obtained with the firefly algorithm are higher than those obtained with the decision tree and support vector regression algorithms.
      • Open Access Article

        13 - A hybrid metaheuristic model in the Forex market to optimize investment strategies based on market trend forecasting
        Alireza Sadeghi, Mehdi Madanchi Zaj, Amir Daneshvar
        Determining the appropriate strategy for buying or selling in the foreign exchange market is very important for companies seeking to cover exchange rate fluctuations against the national currency. This study proposes a new approach based on genetic algorithms and support vector machines for trading in the foreign exchange market. A new algorithm is presented that can generate technical investment rules based on forecast certainty. For prediction, a combination of a combined support vector machine (HSVM) algorithm, which classifies the market into three different classes (uptrend, downtrend, sideways), and a dynamic genetic algorithm, which optimizes trading rules based on several different technical indicators, has been used. Rial–dollar pair data are used as training and test data for the period between 1392 and 1398 (Iranian calendar). The proposed machine learning architecture, as well as the implementation and study of the proposed trading system, are fully described. The research shows promising results during the test period, in which the return on investment was 129%.
      • Open Access Article

        14 - Optimal Portfolio Selection using Machine Learning Algorithms
        Mohammad Baghar Yazdani Khodashahri, Seyed Hossein Naslemousavi, Mir Saeid Hoseini Shirvani
        Choosing the right portfolio is always one of the most important issues for investors. Price trends are predicted using technical analysis or fundamental analysis: technical analysis focuses on market behavior, while fundamental analysis focuses on the mechanism of supply and demand and how it changes prices. The existence of a solution for predicting the rise or fall of stocks has been studied as a basic need in this research. In the present study, with the help of a supervised (labeled) dataset, a solution based on rough set algorithms and hierarchical analysis is used for feature reduction, and decision tree, support vector machine and business network algorithms are used for prediction. The proposed solution has been implemented and compared with different solutions, and the results have shown that the proposed method, with 80% prediction accuracy and 20% prediction error, has the highest accuracy and the lowest error rate among the compared methods.
      • Open Access Article

        15 - Weekly crude oil price forecasting by hybrid support vector machine model and Autoregressive Integrated Moving Average
        Shapor Mohammadi, Reza Raeie, Hossein Karami
        Fluctuations in crude oil prices, in addition to affecting the economies of the exporting countries, are one of the sources of disruption in oil-dependent economies. Predicting price and volatility has always been one of the challenges facing traders in oil markets, and price forecasting is an imperative and practical task; however, it should be noted that forecasts must be as close to the observed actual results as possible, with the least error. In order to predict the weekly price of Brent crude oil as an oil indicator, and given the difficulty of accurately identifying linear and nonlinear patterns in economic and financial time series, the autoregressive integrated moving average (ARIMA) model, which assumes that the time series has a linear pattern, is combined with the support vector machine (SVM), which has great potential in modeling nonlinear patterns, to enhance the accuracy of prediction. Based on two paired-comparison performance criteria, the root mean square error (RMSE) and the mean absolute magnitude percentage error (MDAPE), computed from the predicted and actual values for each model, the hybrid model in most cases provides smaller errors in predicting the future price of crude oil than the individual applications of the autoregressive integrated moving average model and the support vector machine.
      • Open Access Article

        16 - Design of Anomaly Based Intrusion Detection System Using Support Vector Machine and Grasshopper Optimization Algorithm in IoT
        Sepehr Sharifi, Soulmaz Gheisari
        Computer networks play an important and practical role in communication and data exchange, and they also make it easy to share resources. Today, various types of computer networks have emerged, one of which is the Internet of Things. In the Internet of Things, network nodes can be smart objects; in this sense, the network has many nodes and carries a great deal of traffic. Like any computer network, it faces its own challenges and problems, one of which is the issue of network intrusion and disruption. This dissertation focuses on detecting anomaly-based intrusion into the Internet of Things using data mining. In this study, after collecting and preparing the data, a support vector machine improved with the grasshopper optimization algorithm is used as the proposed method to detect anomaly-based intrusion in the Internet of Things. The bagging and k-nearest neighbor classifiers and a basic SVM are compared with it based on error types and standard performance criteria. The simulation results show 97.2% accuracy for the proposed method and better performance compared to the other methods.
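
A hedged sketch of the hyperparameter-tuning idea described above: candidate (C, gamma) pairs for an SVM are improved iteratively by a simple swarm-style random search that stands in for the grasshopper optimization algorithm (the real GOA uses specific position-update rules not reproduced here). Data are synthetic, not IoT traffic.

```python
# Illustrative sketch only: tuning SVM hyperparameters with a crude swarm-style
# search standing in for the grasshopper optimization algorithm named in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, random_state=0)   # placeholder traffic features

def fitness(log_c, log_gamma):
    """Cross-validated accuracy for a candidate (C, gamma) pair."""
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=3).mean()

best, best_fit = None, -np.inf
swarm = rng.uniform([-2, -4], [3, 1], size=(12, 2))      # 12 candidates in log10 space
for _ in range(8):                                        # search iterations
    for lc, lg in swarm:
        f = fitness(lc, lg)
        if f > best_fit:
            best, best_fit = (lc, lg), f
    # move candidates toward the best one with some noise (crude swarm-style update)
    swarm = swarm + 0.3 * (np.array(best) - swarm) + rng.normal(0, 0.1, swarm.shape)
print("best C=%.3g, gamma=%.3g, CV accuracy=%.3f" % (10 ** best[0], 10 ** best[1], best_fit))
```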
      • Open Access Article

        17 - Offering a model for Persian text classification by combining classification methods
        Iman Jamali, Seyed Javad Mirabedini, Ali Harounabadi
        Information extraction techniques, natural language processing and machine learning have been widely used for text classification. The general purpose of document categorization is to classify documents into a certain number of pre-determined categories; each document can be placed in one, several, or no category. For any document, the question is which of the categories it should be placed in, and with automatic learning any document can be assigned to a category automatically. In this thesis, after data collection and cleanup, the selected text is weighted using the normalized term frequency–inverse document frequency (norm TF-IDF) method; features are then selected in two stages using document frequency (DF) and chi-square (SChi), and their dimensions are reduced using principal component analysis (PCA). In the next stage, the proposed model is implemented by combining 21 support vector machine (SVM) classifiers, and the accuracy of the model is assessed with 10-fold cross-validation. Experimental results show that this model achieves a text classification accuracy of 91.86% for seven categories, which is higher than earlier work.
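
An illustrative sketch of the pipeline described above (TF-IDF weighting, chi-square feature selection, dimensionality reduction, and an ensemble of SVM classifiers), assuming scikit-learn. TruncatedSVD stands in for PCA because TF-IDF matrices are sparse, and the tiny English corpus is purely hypothetical.

```python
# Illustrative sketch only: TF-IDF -> chi-square selection -> SVD reduction ->
# ensemble of SVMs, mirroring the pipeline the abstract describes.
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

docs = ["the match ended in a draw", "the team won the final",
        "the bank raised interest rates", "stocks fell on market news",
        "the striker scored two goals", "inflation pressures the economy"]
labels = ["sport", "sport", "economy", "economy", "sport", "economy"]

model = Pipeline([
    ("tfidf", TfidfVectorizer()),                              # norm TF-IDF weighting
    ("chi2", SelectKBest(chi2, k=10)),                         # keep the 10 best terms by chi-square
    ("svd", TruncatedSVD(n_components=4, random_state=0)),     # PCA-like reduction for sparse data
    # 21 linear SVMs trained on random feature subsets and combined by voting.
    ("svm_ens", BaggingClassifier(LinearSVC(), n_estimators=21,
                                  bootstrap=False, max_features=0.75, random_state=0)),
])
model.fit(docs, labels)
print(model.predict(["the goalkeeper saved a penalty"]))
```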
      • Open Access Article

        18 - Computational methods and quantitative structure–property relationship study for prediction of the melting point of carbocyclic nitroaromatic compounds using chemical and quantum mechanics descriptors: combining DFT and QSPR calculations
        Mehdi Nekoei, Mehdi Maham
        The DFT-B3LYP method with the 6-31G(d) basis set was used to calculate several quantum chemical descriptors for 60 carbocyclic nitroaromatic compounds. A suitable set of quantum mechanics and chemical descriptors was calculated, and quantitative structure–property relationship models for predicting the melting point of carbocyclic nitroaromatic compounds were studied using multiple linear regression (MLR) and the support vector machine (SVM). First, the structures of the compounds were drawn, the quantum mechanics descriptors were calculated, and the stepwise method was employed to select the descriptors that resulted in the best-fitted models. A linear MLR model was developed first; the selected molecular descriptors were then used as inputs for the SVM. The results obtained using the SVM were compared with those of the MLR, which revealed the superiority of the SVM model over the MLR method.
      • Open Access Article

        19 - Quantitative Structure-Property Relationship Study for Prediction of the Solvent Polarity Using Quantum Mechanics Descriptors and Support Vector Machine
        Mehdi Nekoei, بهزاد چهکندی
        A quantitative structure–property relationship (QSPR) study was performed to predict the polarity of some solvents using quantum mechanics descriptors and the support vector machine. Experimental S′ values for 69 solvents were assembled. This set included saturated and unsaturated hydrocarbons and solvents containing halogen, cyano, nitro, amide, sulfide, mercapto, sulfone, phosphate, ester, ether, etc. After drawing the structures of the molecules, the suitable molecular descriptors were calculated. Then, the stepwise multiple linear regression (SW-MLR) variable selection method was employed to select the prominent descriptors having the most significant contributions to the polarity of the molecules. A multiple linear regression (MLR) model was constructed first; a support vector machine (SVM) model was then used to obtain better results. A comparison of the results of the two methodologies indicated the superiority of SW-SVM over the SW-MLR method.
      • Open Access Article

        20 - Modeling and quantitative structure-property relationship (QSPR) study to predict the acidity constants of some chemical compounds using multiple linear regression and support vector machine
        Mehdi Nekoei, Abbass Taheri, Majid Mohammadhosseini
        Modeling and a quantitative structure–property relationship (QSPR) study to predict the acidity constants of some chemical compounds were performed using multiple linear regression (MLR) and the support vector machine (SVM). First, the structures of the chemical compounds were drawn and a suitable group of descriptors was calculated. Then, the stepwise selection method was used to obtain the descriptors most related to the chemical property of the compounds. The linear multiple linear regression (MLR) model and the nonlinear support vector machine (SVM) model were then used to predict the acidity constants of the compounds. The statistical results showed that the SVM method was superior to the MLR method.
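
A minimal, hypothetical QSPR-style sketch of the comparison described in the three abstracts above: forward feature selection (standing in for the stepwise method) over synthetic molecular descriptors, followed by cross-validated MLR and SVM regressors; scikit-learn is assumed and no real descriptor set is used.

```python
# Illustrative sketch only: descriptor selection + MLR vs. SVM comparison in a
# QSPR-like setting, on synthetic descriptors and a synthetic property (pKa-like).
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 15))                                                    # hypothetical descriptors
pka = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(0, 0.3, 60)     # hypothetical property

# Forward selection of descriptors with a linear model (stand-in for stepwise MLR).
selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=3).fit(X, pka)
X_sel = selector.transform(X)

for name, model in [("MLR", LinearRegression()), ("SVM", SVR(kernel="rbf", C=10.0))]:
    r2 = cross_val_score(model, X_sel, pka, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```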
      • Open Access Article

        21 - Spectral discrimination of important orchard species using hyperspectral indices and artificial intelligence approaches
        Mohsen Mirzaie, Mozhgan Abbasi, Safar Marofi, Eisa Solgi, Roohollah Karimi
        Studying spectral reflectance through spectral indices allows the optimal use of the wide range of spectral wavelengths in hyperspectral data. The purpose of this study was to introduce and evaluate the performance of spectral indices for discriminating the dominant orchard species in Chaharmahal and Bakhtiari province. In this study, 150 spectral curves were measured in the range of 350 to 2500 nm from grape, walnut and almond trees. After initial correction, 30 of the most important spectral indices were extracted. Analysis of variance and mean comparisons were applied to identify the optimal indices for species discrimination at the 99% confidence level. Then, artificial neural network (ANN) and support vector machine (SVM) approaches were used to evaluate the performance of the indices in species discrimination. The ANOVA results indicated that the Moisture Stress Index (MSI), the band ratio at 1,200 nm, the normalized phaeophytinization index (NPQI) and the cellulose absorption index (CAI) are the optimal indices for discriminating the studied species. The performance evaluation of the introduced indices in some of the ANN and SVM structures reached 100% accuracy in both the training and testing stages, which shows the effectiveness of these indices in distinguishing orchard species. This result emphasizes the value of spectroscopic studies for separating orchard species before analyzing hyperspectral images, given their large data volume, high cost and heavy data analysis.
      • Open Access Article

        22 - Comparison of different algorithms for land cover mapping in sensitive habitats of Zagros using Sentinel-2 satellite image: (Case study: a part of Ilam province)
        Saeedeh Eskandari
        The western forests and rangelands of Iran in the Zagros habitats have largely been destroyed for various reasons in recent years. Preparing a land cover map of these sites is the first step in protecting them and preventing further destruction. The aim of this research was to select the best algorithm for land cover mapping in a part of the Ilam site using a Sentinel-2 image. After preparing the Sentinel-2 image, supervised classification was performed with seven different algorithms (maximum likelihood, minimum distance to mean, Mahalanobis distance, spectral angle mapper, spectral correlation mapper, support vector machine, and neural network). For accuracy assessment of the land cover maps, stratified random points were created and located in the field. In the field visit, after determining the current land cover of each point within the plot area, the real land cover of each point was compared with the land cover assigned to the same point in the pixel area based on the classification results, and the accuracy of the algorithms was evaluated. The results showed that the support vector machine algorithm had the highest accuracy in producing the land cover map, with an overall accuracy of 79% and a Kappa index of 0.70. Analysis of the land cover map obtained from this algorithm showed that, of the total study area (16085.31 ha), the dense forest area was 319.64 ha, the semi-dense forest area 361.44 ha and the sparse forest area 1832.36 ha. Also, the rangeland area was 7352.78 ha, the garden area 62.32 ha, the agricultural area 658.42 ha and understorey agriculture 4504.64 ha. For optimal management of this sensitive ecosystem, land cover mapping with this algorithm at regular time intervals is essential to investigate forest and rangeland changes and to control human-made land uses.
      • Open Access Article

        23 - The effect of kernel optimization in modeling drought phenomenon using computational intelligence (Case study: Sanandaj)
        Jahanbakhsh Mohammadi, Alireza Vafaeinezhad, Saeed Behzadi, Hossein Aghamohammadi, Amirhooman Hemmasi
        Drought is one of the most important natural disasters, with devastating and harmful effects in various economic, social, and environmental fields. Because of the repetitive behavior of this phenomenon, if appropriate solutions are not implemented its destructive effects can remain in the region for years after its occurrence. Most natural disasters, such as floods, earthquakes, hurricanes, and landslides, can cause severe financial and human damage to society in the short term, but droughts are slow-moving and creeping in nature, and their devastating effects appear gradually and over a longer period of time. Therefore, by modeling drought, it is possible to prepare plans for drought preparedness and reduce the damage it causes. In this study, the computational intelligence algorithms of the multi-layer perceptron (MLP) neural network, the generalized regression neural network (GRNN), support vector regression with a standard kernel (SVR), and support vector regression with the proposed new kernel (SVR_N) were used to model drought using the Standardized Precipitation Index. The modeling results, in most cases, showed better performance of the proposed SVR_N model than the other models, with RMSE and R2 values of 0.093 and 0.991, respectively; the GRNN, MLP, and SVR models performed best after SVR_N, in that order. Overall, the drought phenomenon was modeled best by the support vector regression method with the proposed kernel.
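
For illustration, the sketch below shows how a user-defined kernel can be plugged into support vector regression with scikit-learn, analogous in spirit to the proposed-kernel idea above. The mixed RBF/polynomial kernel and the synthetic SPI-like data are assumptions for the example, not the authors' formulation.

```python
# Illustrative sketch only: support vector regression with a custom (user-defined)
# kernel; the kernel here is a simple RBF/polynomial mix chosen for illustration.
import numpy as np
from sklearn.svm import SVR

def mixed_kernel(A, B, gamma=0.5, degree=2, alpha=0.7):
    """Convex combination of an RBF kernel and a polynomial kernel (both PSD)."""
    sq_dist = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    rbf = np.exp(-gamma * sq_dist)
    poly = (A @ B.T + 1.0) ** degree
    return alpha * rbf + (1 - alpha) * poly

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))                                      # e.g., lagged SPI values as predictors
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0, 0.05, 120)     # placeholder SPI target

model = SVR(kernel=mixed_kernel, C=10.0).fit(X, y)
print("training R^2:", round(model.score(X, y), 3))
```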
      • Open Access Article

        24 - Three-dimensional calibration of land use changes using the integrated cellular automata–Markov chain model in the Gorgan-rud river basin
        Mahboobeh Hajibigloo, Vahed Berdi Sheikh, Hadi Memarian, Chooghi Bairam Komaki
        Background and Objective: Land use/cover changes (LU/LC) are considered one of the most important issues in natural resource management, sustainable development and environmental change at local, national, regional and global scales. The conversion of uses into one another, and of permissible uses into impermissible ones, such as converting agricultural lands into residential regions or rangelands into eroded, low-yielding dry farming lands, is always an important issue in natural resources. Detecting the patterns of land use change and predicting future changes, in order to plan for the optimal utilization of land uses in natural resource management, reveal the need for modeling the spatial and temporal changes of LU/LC. This study aims to assess the efficiency of the integrated cellular automata–Markov chain model (CA-Markov model) in simulating and predicting the spatial and temporal changes of land use/land cover (LU/LC) in the Gorgan-rud river basin, applying the three-dimensional Pontius–Millones analysis to the calibration of land use changes with three assessment indices, Quantity Disagreement, Allocation Disagreement and Figure of Merit, as new indices for assessing the accuracy of the CA-Markov model. Materials and Methods: In this research, Landsat-5 Thematic Mapper (TM) and Landsat-8 Operational Land Imager (OLI) images acquired from the U.S. Geological Survey (USGS) website were used to predict land use changes with the integrated cellular automata–Markov chain model in the Gorgan-rud river basin. Seven land use classes were separated for the basin: forest land (use code 1), agricultural land (use code 2), rangeland (a mixture of shrubbery, rangeland and agriculture, use code 3), water bodies (use code 4), barren land (barren, rangeland and agriculture, use code 5), residential and industrial regions (use code 6), and streambed (use code 7). In this study, an object-oriented classification method with the support vector machine (SVM) algorithm was used to classify the Landsat 5 and 8 satellite images and extract the land use classes of the Gorgan-rud river basin. A segmentation scale of 50 units (SL 50) was selected to classify the satellite images of 1987, 2000, 2009 and 2017. The accuracy of the support vector machine algorithm in the object-based classification of the satellite images was assessed by reporting the overall accuracy, Kappa coefficient, user's accuracy, producer's accuracy, commission error and omission error for the four study dates. To understand how the changes in the region occurred during the three-decade study period, and which classes expanded and which shrank, the changes in the class boundaries were revealed and the percentage of change in each class was obtained using the classification maps and the IDRISI software. The CA-Markov model predicts the changes of the different groups of LU/LC units based on the spatial neighbourhood concept and the transition probability matrix. Preparing land suitability maps is necessary for predicting land use changes, so that the spatial changes of each use can be controlled by probability rules via filtering of the suitability maps.
        Validation of the Markov model was performed using the three-dimensional Pontius–Millones analysis with the three assessment indices of Figure of Merit, Quantity Disagreement and Allocation Disagreement. Results and Discussion: The object-oriented land use classification with the support vector machine algorithm showed that the highest commission and omission errors were observed for rangelands and agricultural lands, at 19.12 and 18.55 percent respectively, in the land use map of 2009. The lowest producer's accuracy, 71.49 percent, belongs to the rangeland class in the 2009 land use map, and the lowest user's accuracy, 71.45 percent, belongs to the agricultural land class in the 2017 land use map. According to the results, the largest positive change is the increase in agricultural land use, and the largest negative changes are the decreases in rangeland and forest land use during the three decades from 1987 to 2017. The largest forest land decrease (4.8 percent), the largest agricultural land increase (5.3 percent), the largest rangeland decrease (9 percent), the largest barren land increase (4.6 percent) and the largest residential and industrial land increase (0.8 percent) occurred during the periods 2000-2017, 1987-2017, 2009-2017, 2009-2017, and 1987-2017, respectively. After validating the predicted land use changes of the CA-Markov model, based on the analysis of the five states in the three-dimensional Pontius–Millones analysis, the CA-Markov model showed high efficiency in the simulation process, with a correct simulation prediction of 89.92 percent. After running the CA-Markov analysis on the land use maps obtained from the classification of the satellite images, one transition probability matrix and one transition area matrix were created. In the predictions made with the CA-Markov model for 2017 to 2033, the largest changes relate to decreases in barren and forest land to 16966 and 6961 hectares respectively, while increases in rangeland, residential and agricultural land to 20397, 3913 and 3825 hectares respectively are expected. Conclusion: Detecting land use changes with the LCM tool for the three decades 1987-2017 in the Gorgan-rud river basin showed that forest, agricultural and residential uses have changed significantly in this region. The predictions of land use changes over the coming eighteen years with the integrated cellular automata–Markov chain model, following the changes detected by the LCM tool, show that this area will face an extreme deforestation phenomenon. Investigation of the results of the future land use model using the Markov transition estimator showed that future use changes can be predicted from the existing environmental conditions, and that agriculture will increase dramatically in the Gorgan-rud river basin during the coming eighteen years. Thus, with comprehensive and long-term management, water and soil resources can be protected and the degradation of these valuable resources prevented.
        The three indices of Quantity Disagreement, Allocation Disagreement and Figure of Merit in the three-dimensional Pontius–Millones analysis played an important role in representing the accuracy and calibrating the land use classification and the land use prediction, in agreement with the results of previous studies on accuracy assessment with these indices. The results of the land use change analysis using the LCM tool and the integrated cellular automata–Markov chain model for the period 1987 to 2035 show the degradation of more than 24309 hectares of forest land and an increase in agriculture over an area of about 62421 hectares, indicating the human interference and deforestation this area faces.
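
As a rough illustration of the Markov-chain half of a CA-Markov analysis, the sketch below cross-tabulates two small hypothetical land-use grids into a transition probability matrix and projects class areas one step ahead; the cellular-automata spatial filtering and suitability maps used in the study are not reproduced.

```python
# Illustrative sketch only: transition probability matrix from two land-use maps and
# a one-step Markov projection of class areas (classes: 0=forest, 1=agriculture, 2=rangeland).
import numpy as np

map_t1 = np.array([[0, 0, 2, 2],
                   [0, 1, 2, 2],
                   [1, 1, 1, 2],
                   [1, 1, 2, 2]])
map_t2 = np.array([[0, 1, 2, 2],
                   [1, 1, 2, 2],
                   [1, 1, 1, 1],
                   [1, 1, 2, 2]])

n = 3
counts = np.zeros((n, n))
for a, b in zip(map_t1.ravel(), map_t2.ravel()):
    counts[a, b] += 1                               # cross-tabulate class transitions
P = counts / counts.sum(axis=1, keepdims=True)      # row-normalized transition probabilities

area_t2 = np.bincount(map_t2.ravel(), minlength=n)
projected_t3 = area_t2 @ P                          # expected class areas one step ahead
print("transition matrix:\n", P.round(2))
print("projected areas:", projected_t3.round(1))
```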
      • Open Access Article

        25 - Efficiency of mangrove indices in mapping some mangrove forests using Landsat 8 imagery in southern Iran
        Yousef Erfanifard, Mohsen Lotfi Nasirabad
        Background and Objective Mangrove forests are one of the important plant ecosystems established across the intertidal zones and consist of evergreen species. According to Food and Agriculture Organization (FAO) reports, the area of world mangrove forests is almost 14.6 More
        Background and Objective Mangrove forests are one of the important plant ecosystems established across the intertidal zones and consist of evergreen species. According to Food and Agriculture Organization (FAO) reports, the area of world mangrove forests is almost 14.6 million ha and more than 40% of them are located in Asia. Indonesia has the largest mangrove forests with 2.3 million ha with the highest richness. Moreover, Iran with approximately 10,000 ha of mangrove forests in northern parts of the Persian Gulf and Oman Sea is one of the countries with mangrove ecosystems. The ecological and socio-economic importance of mangrove forests is evident to researchers and managers, however, an annual quantitative and qualitative decrease in these forests happens due to natural (e.g., storm) and anthropogenic (e.g., overexploitation) factors. Therefore, it seems essential to develop a practical approach in order to protect the present sites and improve the management, monitoring, and assessment of mangrove forests. The first step in every management and conservation plan in mangrove forests is mapping their spatial distribution and monitoring the spatial changes. It is important to find efficient methods for mensuration and assessment of temporal and spatial changes of mangrove forests for their efficient management and conservation. Field measurement difficulties in these ecosystems result in the rapid development of remote sensing data in mangrove mapping. However, previous studies have shown that common vegetation indices are not efficient in mangrove classification because of the high greenness and moisture content of leaves. Assessing the spectral signature of mangrove forests, researchers have designed specific indices for mangrove classification on satellite imagery. Since the mangrove indices have been recently developed, their efficiency in similar conditions has not been investigated, while they have been compared to some vegetation indices or individually investigated in case studies. Additionally, the mangrove indices have not been applied in mapping mangrove forests of southern Iran. Therefore, the aim of this study was a comparison of eight mangrove indices in mapping mangrove forests of Nayband Gulf (Bushehr province), Sirik (Hormozgan province), and Govatr Gulf (Sistan-Baluchestan province) on Landsat 8 imagery.  Materials and Methods Previous studies have shown that mangrove forests in Iran are distributed in 21 sites in 10 cities in Bushehr, Hormozgand, and Sistan-Baluchestan provinces. In order to assess the mangrove indices, a region was selected in each province. Mangroves in Nayband Gulf are concentrated in Bidkhun and Basatin Creeks. In Sirik, mangroves are located in the Azini wetland, and in Govatr Gulf, they are established in Baho and Govatr Creeks. Low- and high-tide Landsat imagery of each study area related to 2020 was downloaded. After pre-processing, the images were used to compute MI (Mangrove Index), NDMI (Normalized Difference Mangrove Index), CMRI (Combined Mangrove Recognition Index), MDI (Mangrove Discrimination Index), MMRI (Modular Mangrove Recognition Index), L8MI (Landsat 8 Mangrove Index), and MVI (Mangrove Vegetation Index). Moreover, low- and high-tide images were implemented in making SMRI (Submerged Mangrove Recognition Index). The classification of soil, water, and mangrove was performed by a support vector machine (SVM) algorithm. 
In addition to common accuracy criteria (i.e., overall accuracy, Kappa coefficient, mangrove producer's and user's accuracies), the results were evaluated by the area under the curve (AUC) of the receiver operating characteristic (ROC). Results and Discussion: The efficiency of 10 mangrove indices was evaluated in similar conditions. The number of selected indices was eight; however, two of them (i.e., L8MI, MDI) were calculated twice, once with SWIR1 and once with SWIR2, so in total 10 mangrove indices were used in three regions to classify mangrove forests. Among the indices, SMRI was selected as the most efficient mangrove index. One of the likely reasons for the efficiency of this index is its use of low- and high-tide imagery to detect mangroves. In addition to PAmangrove and UAmangrove, the overall accuracy and kappa coefficient of soil, water, and mangrove for SMRI were higher than those of the other indices. The results of MDI and L8MI showed that they were more efficient with SWIR2 in Nayband Gulf; one likely reason for this result is the presence of urban areas and non-mangrove vegetation cover in Nayband Gulf. However, both indices were more accurate in mangrove discrimination when calculated with SWIR1 in Govatr Gulf. Investigation of AUC values confirmed that SMRI was the most efficient index among all studied indices for mangrove mapping within the three study areas. The AUC of mangroves in Nayband Gulf, Sirik, and Govatr Gulf was 0.94, 0.92, and 0.93, respectively. The area of mangrove forests was estimated using SMRI in Nayband Gulf (260.1 ha), Sirik (1049.2 ha), and Govatr Gulf (649.5 ha). Conclusion: In general, the results showed that all mangrove indices were reliable in mangrove discrimination in the three study areas and no weak results were obtained. The AUC values of mangroves using SMRI were more than 0.9 in all three regions and this index was identified as the most reliable index in all regions. The outcomes also revealed that the efficiency of the mangrove indices was lower in Nayband Gulf compared to the two other regions (AUC of 0.6 for NDMI and L8MI-1). The area of mangrove forests in Nayband Gulf, Sirik, and Govatr Gulf was estimated on Landsat 8 imagery of 2020. The results indicated that, among the study sites, Sirik (1049.2 ha) and Basatin Creek (43.3 ha) had the highest and the lowest area covered by mangroves, respectively. It is suggested to use SMRI in other mangrove forests in southern Iran to confirm the findings of the present study. Manuscript profile
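        The abstract above describes classifying soil, water, and mangrove pixels with an SVM and reporting overall accuracy and kappa. The following is a minimal, hedged sketch of that kind of pipeline in scikit-learn; the index values and class labels are hypothetical placeholders, not the study's data or its exact settings.
```python
# Hedged sketch (not the paper's exact pipeline): classifying Landsat pixels into
# soil, water, and mangrove from mangrove-index values with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, accuracy_score

# X: one row per training pixel, columns = index values (e.g., SMRI, NDMI, ...)
# y: class codes 0 = soil, 1 = water, 2 = mangrove (assumed encoding)
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))          # placeholder for real index values
y = rng.integers(0, 3, size=600)       # placeholder for reference labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("overall accuracy:", accuracy_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))
```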
      • Open Access Article

        26 - Modeling of Aboveground Carbon Stock Using Sentinel-1, 2 Satellite Imagery and Parametric and Nonparametric Relationships (Case Study: District 3 of Sangdeh Forests)
        Seyed Mahdi Rezaei Sangdehi Asghar Fallah Homan Latifi Nastaran Nazariani
        The goal of this study is to find suitable statistical and empirical models for estimating aboveground carbon storage by combining spectral and radar data from Sentinel-1 and Sentinel-2. A total of 150 random circular sample plots, each with an area of 10 acres, were selected so as to cover all elevation classes. In each sample plot, the tree species, the total height of the trees, and the diameter at breast height of trees larger than 7.5 cm were recorded. The aboveground biomass of the sample plots was then calculated based on the FAO global model, and the aboveground carbon storage was obtained by applying a conversion coefficient. The radar and optical images underwent the required preprocessing and processing operations, after which the pixel values corresponding to the ground sample plots were extracted from the spectral bands and used as independent variables. Modeling was performed with the non-parametric methods RF, SVM, and kNN and the parametric method of multiple linear regression. The results showed an average aboveground biomass of 469.07 tons per hectare and a carbon storage of 234.53 tons per hectare. The highest correlations between the original and derived bands and the two studied attributes were related to the near-infrared band. Validation of the models showed that, when the optical and radar data of Sentinel-1 and 2 were combined, the random forest method performed better, with RMSE% and bias of (32.79, -2.24) for biomass and (30.79, 0.01) for aboveground carbon storage, respectively. In general, the validation results showed that the RF method gave better results than the SVM in estimating both attributes when the Sentinel-1 and 2 data were combined. Manuscript profile
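        The plot-level modeling described above can be illustrated with a short, hedged sketch: a random forest regressor relating band values to field-measured biomass, evaluated with the relative RMSE and bias the abstract reports. The arrays below are hypothetical placeholders, not the study's data.
```python
# Hedged sketch of plot-level biomass modeling with a random forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 12))                 # placeholder: band/index values per plot
y = 450 + 50 * rng.normal(size=150)            # placeholder: AGB (t/ha) per plot

rf = RandomForestRegressor(n_estimators=500, random_state=1)
pred = cross_val_predict(rf, X, y, cv=10)      # out-of-sample predictions

rmse_pct = 100 * np.sqrt(np.mean((y - pred) ** 2)) / np.mean(y)
bias_pct = 100 * np.mean(pred - y) / np.mean(y)
print(f"RMSE% = {rmse_pct:.2f}, bias% = {bias_pct:.2f}")
```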
      • Open Access Article

        27 - Comparison of the effectiveness of machine learning methods in modeling fire-prone areas (Ilam Province, Darehshahr City)
        maryam mohammadian Maryam Morovati Reza Omidipour
        Fire is one of the most important natural hazards and has a great impact on the structure and dynamics of natural ecosystems. Because Iran lies in the arid and semi-arid belt of the world, a large number of human-caused and natural fires occur in different regions of the country every year. Determining areas sensitive to fire occurrence therefore plays an important role in fire management of natural resources. The current study aims to identify fire-prone areas in Darehshahr city, Ilam province, using two machine learning methods, random forest (RF) and support vector machine (SVM), and 2024 fire occurrence points. Environmental factors were prepared in categories including topographic factors (altitude, slope aspect, slope angle), climatic factors (rainfall, relative humidity, wind, temperature), biological factors (vegetation and soil moisture) and man-made factors (distance from residential areas, distance from roads, distance from agricultural land, distance from rivers). Model accuracy was evaluated using the area under the ROC curve (AUC) and cross-validation statistics. The AUC values showed that both models had good accuracy, although the RF model (AUC = 0.97) was more accurate than the support vector machine model (AUC = 0.86). According to the results of the RF model, about 60% of the area falls in the low-risk class and about 20% in the high fire-risk class. Examination of the contribution of the factors affecting fire occurrence showed that man-made factors (distance from residential areas) and climatic factors (temperature) played a more important role in areas with a history of fire. Therefore, raising public awareness and reducing risky behavior in nature can decrease fire occurrence in this area and contribute greatly to protecting the environment and preserving natural resources. Manuscript profile
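        A hedged sketch of the evaluation step reported above is given below: comparing RF and SVM susceptibility models by the area under the ROC curve. The predictor matrix and fire/absence labels are hypothetical placeholders for the real conditioning factors and fire points.
```python
# Hedged sketch: comparing RF and SVM fire-susceptibility models by ROC AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 13))         # placeholder: topographic/climatic/human factors
y = rng.integers(0, 2, size=1000)       # placeholder: 1 = fire point, 0 = absence point

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)
for name, model in [("RF", RandomForestClassifier(n_estimators=300, random_state=2)),
                    ("SVM", SVC(kernel="rbf", probability=True, random_state=2))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```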
      • Open Access Article

        28 - Comparing artificial neural network, support vector machine and object-based methods in preparing land use/cover maps using Landsat-8 images
        Farnoush Aslami Ardavan Ghorbani Behrouz Sobhani Mohsen Panahandeh
        Preparing land use/cover maps for spatial planning and management is essential. Nowadays, satellite images and remote sensing techniques have widespread applications in disciplines such as agriculture and natural resources, owing to their capability to produce up-to-date data and analyze imagery. In the present study, artificial neural network (ANN), support vector machine (SVM) and object-based techniques were utilized to produce the land use and vegetation maps of Ardabil, Namin, and Nir counties. Landsat-8 Operational Land Imager (OLI) images (2013) were used after geometric correction and topographic normalization and classified into nine land use/cover classes: water bodies, irrigated farming, rainfed farming, meadows, outcrops, forests, rangelands, residential areas, and airport areas. The accuracy assessment gave overall accuracies of 89.91, 85.68 and 94.37% for the ANN, SVM and object-based (OB) maps, respectively, with Kappa coefficients of 0.88, 0.82 and 0.93, indicating that the object-based method has advantages over the other two methods; on the other hand, all three methods provided acceptable accuracy for the land use/cover maps. Overall, three advanced classification methods were examined in a heterogeneous area with elevation changes of up to 3600 m using images of the newly launched Landsat 8, and the most appropriate land use/cover mapping method was identified. Manuscript profile
      • Open Access Article

        29 - Evaluating non-parametric supervised classification algorithms in land cover map using LandSat-8 Images
        Vahid Mirzaei Zadeh Maryam Niknejad Jafar Oladi Qadikolaei
        The aim of this study was to evaluate the efficiency of three algorithms, namely support vector machine, fuzzy decision tree and neural network, for mapping the land cover of the Arakvaz watershed using Landsat OLI images (2014). Geometric correction and image pre-processing were performed, and training samples of the land cover classes were determined for the classification operations. The separability of the samples in the vegetation classes was evaluated using a statistical divergence index. In the next stage, in order to evaluate the accuracy of the classification results, a ground truth map with dimensions of 550 m was designed using a systematic approach and the land cover types in the sampling plots were determined. Finally, the efficiency of each classification method was investigated by criteria such as overall accuracy, kappa coefficient, producer's accuracy and user's accuracy. Comparing the accuracy and kappa coefficient obtained for the three classifiers with a proper band set against the ground truth map indicates that the support vector machine (SVM) classifier, with an overall accuracy of 91.26% and a kappa coefficient of 0.8731, produced better results than the other algorithms. The results also showed that forest lands were separated and classified with higher accuracy than the other land use classes. Manuscript profile
      • Open Access Article

        30 - Parkinson’s disease detection using EEG signals analysis based on Walsh Hadamard transform
        Yasamin Ezazi Peyvand Ghaderyan
        Background: Parkinson's disease (PD) is one of the most important diseases of the nervous system and occurs due to the degeneration of dopaminergic neurons in the substantia nigra. Because of its increasing prevalence, the lack of a specific treatment, and the aggravation of symptoms over time, PD detection is very important for the optimal management of patients' lives. Therefore, the development of non-invasive, low-cost and reliable clinical diagnostic methods plays an essential role in helping physicians diagnose the disease, slow its progression and provide better control strategies to improve patients' quality of life. Among diagnostic methods, recording and analyzing the electroencephalogram (EEG) signal, as a low-cost and non-invasive approach, has attracted a lot of attention. Method: EEG signal analysis in the time domain contains important information but does not include frequency information. Hence, this study is based on extracting new frequency features from the EEG signal using the Walsh-Hadamard transform (WHT). The WHT converts the signal from the time domain into the frequency domain and decomposes it into orthogonal rectangular waves. In this method, after calculating the Walsh coefficients, a set of features such as entropy, impulsive metrics, and basic and higher-order statistical features were extracted from these coefficients. Subsequently, the discriminating capability of the presented method was assessed using two classifiers, namely the support vector machine and the k-nearest neighbor, to separate PD patients from the healthy group. Results: The proposed method was evaluated using the EEG signals of 28 healthy individuals and 28 patients with PD in two medication states (ON and OFF) recorded during a reinforcement learning task. The results showed that the method is able to detect PD using the entropy feature with the support vector machine and the k-nearest neighbor classifiers, with accuracies of 99.95% and 99.98%, respectively. The good performance of the entropy feature compared to the other features can be attributed to the non-linear and non-stationary nature of the EEG signal. Conclusion: In this study, a non-invasive, low-cost, and reliable method for PD detection using EEG signal analysis has been proposed. The algorithm is a multi-stage technique with a feature extraction approach based on the WHT, the entropy feature, and support vector machine and k-nearest neighbor classifiers. The reported results indicate that this method is effective in PD detection while being simple and easy to apply, as well as being robust to the clinical factor of medication status. Manuscript profile
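        The feature-extraction idea described in this abstract can be illustrated with a hedged sketch: a Walsh-Hadamard transform of an EEG segment followed by an entropy feature. The segment length, normalization and placeholder signal below are illustrative assumptions, not the authors' exact configuration.
```python
# Hedged sketch: Walsh-Hadamard transform of an EEG segment plus an entropy feature.
import numpy as np
from scipy.linalg import hadamard

def wht_entropy(segment):
    """Shannon entropy of the normalized Walsh-Hadamard power spectrum of a segment
    whose length is a power of two."""
    n = len(segment)
    assert n & (n - 1) == 0, "segment length must be a power of two"
    coeffs = hadamard(n) @ segment / n          # Walsh-Hadamard coefficients
    p = coeffs ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Example: one 1024-sample EEG segment (placeholder random data).
rng = np.random.default_rng(3)
segment = rng.normal(size=1024)
print("WHT entropy:", wht_entropy(segment))     # one of the features fed to SVM / kNN
```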
      • Open Access Article

        31 - Differential Information Extraction of Electroencephalogram Signals for Obsessive-Compulsive Disorder Detection
        Farzaneh Manzari Peyvand Ghaderyan
        Introduction: Obsessive-Compulsive Disorder (OCD) is a chronic mental and social disorder with a prevalence of about 2 to 3% of the human population, leading to cognitive impairments and a reduced quality of life for patients. Therefore, a reliable and timely diagnosis can help psychiatrists better treat or control this disease. Method: Previous studies have demonstrated impairments in the interdependence between different brain regions in patients with OCD. Hence, this study provides a new approach based on decomposing the signals into intrinsic components and extracting differential transient changes in the amplitude envelope and phase spectra of EEG signals recorded during Flanker tasks. The proposed algorithm was evaluated on 19 healthy subjects and 11 patients using the Support Vector Machine (SVM) classifier. Result: The obtained results confirmed the capability of the proposed method in diagnosing the disease with a high accuracy of 93.89% using the amplitude differential information of the electroencephalogram signal. Conclusion: Comparing different regions, the statistical features extracted from the frontal lobe, the frontal-parietal network, and the inter-hemispheric features offered better detection ability. Manuscript profile
      • Open Access Article

        32 - Comparison of Linear and Non-linear Support Vector Machine Method with Linear Regression for Short-term Prediction of Queue Length Parameter and Arrival Volume of Intersection Approach for Adaptive Control of Individual Traffic Lights
        mohammad ali kooshan moghadam Mehdi Fallah Tafti
        Introduction: This study was carried out in line with the development of adaptive traffic signal control systems to provide better traffic control at intersections. In this approach, if the predicted data related to future cycles are used to optimize the upcoming schedule, traffic can be controlled in unforeseen situations and managed before the forthcoming cycles are reached. In order to have enough data to create such a model, the required data from two intersections in Yazd city were collected and these intersections were simulated using AIMSUN software. The simulated intersections were then calibrated and validated for existing conditions. The prediction accuracy of the proposed methods was extracted and compared with the linear regression method, using the RMSE, MAE and GEH error measures. Method: The predicted queue length and arrival volume for each entry approach of an intersection are major variables required during the adaptive signal control process. Hence, linear and non-linear support vector regression methods combined with the time series method were used to predict these parameters. To compare the performance of these models with a conventional model, linear regression models were also developed for predicting these parameters. Results: For the model based on the combined linear support vector regression and time series methods, the optimal number of previous cycles used in the model was 6 and 2 for predicting the arrival volume at the Pajuhesh and Seyed Hassan Nasrollah intersections, respectively, and 9 and 11 previous cycles for predicting the queue length at these intersections, respectively. Using the combined non-linear support vector regression and time series methods, the optimal number of previous cycles was 8 and 2 for predicting the arrival volume at the Pajuhesh and Seyed Hassan Nasrollah intersections, and 7 cycles at each intersection for predicting the queue length. Discussion: The RMSE, MAE and GEH measures were used to compare the performance of the developed models against the real data. This comparison indicated that the model based on the combined non-linear support vector regression and time series methods produced the best performance in predicting traffic arrival volume among the aforementioned models. However, in terms of predicting the queue length, this model performed better than the combined linear support vector regression at only one of the intersections. The linear regression model produced the weakest performance in all comparisons. Thus, it can be concluded that combined support vector regression and time series methods are appropriate tools for predicting traffic parameters in these situations. Manuscript profile
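        The prediction scheme outlined above can be sketched in a hedged way: support vector regression fed with the values of the last few signal cycles (a time-series lag embedding) to forecast the next cycle's arrival volume. The lag count, kernel settings and synthetic series are illustrative assumptions.
```python
# Hedged sketch: SVR on lagged cycle values for short-term arrival-volume prediction.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

def lag_matrix(series, n_lags):
    """Build (X, y) where each row of X holds the previous n_lags values."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

rng = np.random.default_rng(4)
volume = 300 + 30 * np.sin(np.arange(200) / 8) + rng.normal(scale=10, size=200)  # placeholder cycles

X, y = lag_matrix(volume, n_lags=6)          # e.g., 6 previous cycles as predictors
split = int(0.8 * len(y))
model = SVR(kernel="rbf", C=100, epsilon=1.0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAE on held-out cycles:", mean_absolute_error(y[split:], pred))
```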
      • Open Access Article

        33 - The opinion mining of Digikala reviews by semi-supervised support vector machine
        zohre Karimi Hadis Haghiri
        Introduction: The widespread use of the internet and social media platforms has led to an explosion of digital data, including users' opinions about various services and products. These opinions are valuable sources of information for businesses and organizations seeking to understand the needs and preferences of their customers. Supervised machine learning models have proven effective in analyzing users' opinions; however, to achieve efficient results, a sufficient amount of labeled training data is necessary. Labeling data requires a considerable amount of time and resources, which can be a significant challenge for many organizations. This is where semi-supervised learning comes in, utilizing both labeled and unlabeled data to improve the performance of the model. Method: In this paper, a semi-supervised approach to analyzing users' Persian opinions is proposed. The method takes advantage of the abundant unlabeled data available, in addition to a small amount of labeled data, in the training phase. The proposed method uses the support vector machine (SVM) algorithm, which has been shown to be effective in opinion mining in related research. It extracts emotional words from comments using sentiment lexicons and then builds term frequency-inverse document frequency vectors. The semi-supervised SVM algorithm is then applied to these vectors to estimate the polarity of sentiments. Results: To evaluate the performance of the proposed method, it was tested on the Digikala comments dataset and compared with the supervised SVM algorithm and the semi-supervised self-training method for different numbers of labeled data, based on accuracy, precision, recall, and F1 criteria. The results indicate that the proposed semi-supervised method outperforms both the supervised SVM algorithm and the semi-supervised self-training method. The impact of the size of the unlabeled data is also investigated in the experiments. Discussion: One advantage of the proposed method is that it can estimate the polarity of opinions that were not seen in the training phase, which is not possible in some graph-based methods. Furthermore, it is not affected by the error of training with pseudo-labeled data in self-training methods. In conclusion, the proposed semi-supervised method provides an efficient solution for analyzing users' opinions in Persian, which businesses and organizations can use to gain insight into their customers' opinions and improve their products and services accordingly. Manuscript profile
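        scikit-learn has no transductive semi-supervised SVM, so the hedged sketch below shows the self-training baseline the abstract compares against, with TF-IDF features and an SVM base learner. The comments and labels are toy placeholders; unlabeled samples are marked with -1 as scikit-learn expects.
```python
# Hedged sketch: self-training with an SVM base learner on TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

comments = [
    "great product, fast delivery", "very satisfied with the quality",
    "excellent value", "works perfectly",
    "terrible quality", "arrived broken and late",
    "waste of money", "very disappointed",
    "does what it should", "packaging was nice",
    "not worth the price", "battery life is ok",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0, -1, -1, -1, -1]   # 1 = positive, 0 = negative, -1 = unlabeled

X = TfidfVectorizer().fit_transform(comments)
clf = SelfTrainingClassifier(SVC(kernel="linear", probability=True), threshold=0.8)
clf.fit(X, labels)
print(clf.predict(X))                                # polarity estimates, incl. unlabeled rows
```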
      • Open Access Article

        34 - A new Approach to Detecting Intrusion and Malicious Behaviors in Big Data
        Homa Movahednejad Mohsen Porshaban Ehsan Yazdani.Chamzini Elahe Hemati Ashani Mahdi Sharifi
        Today, maintaining information security and detecting intrusions are very important for dealing with malicious behaviors in massive data. In this article, a hybrid method for detecting malicious data is presented in which three factors are taken into account: time progress, user history, and scalability. The proposed method utilizes storage and feature extraction techniques to increase speed and reduce the amount of computation. In addition, the support vector machine algorithm has been modified for classification, and a parallelized bacterial foraging optimization algorithm has been used for feature extraction. The results show that the proposed algorithm outperforms existing methods, improving the detection rate by 21%, the false positive rate by 62%, the accuracy by 15% and the execution time by 70%. The reduction in execution time indicates that less energy is needed to run the algorithm, which saves energy and can be beneficial for use in green energy systems. Manuscript profile
      • Open Access Article

        35 - Twin support vector machine-based clustering for feature selection in the microarray data classification problem
        Seyed Mohammad Hossein Moattar Nafiseh Soleimani
      • Open Access Article

        36 - Designing a multi-agent credit scoring system applying ensemble learning
        Ahmad Ghodselahi Hamidreza Naji Ashakan Amir madhi
      • Open Access Article

        37 - Using Data Mining to Predict Bank Customers Churn
        parvin najmi abbas rad maryam shoar
        The intensifying competition in the industrial and economic space and the movement of markets towards full competition have increased firms' inclination to attract more customers and, at the same time, their tendency to operate in various service and manufacturing areas. This policy, known as increasing the share of wallet, makes maintaining and analyzing customer relationships more important, and it becomes necessary to conduct customer behavioral analysis, customer relationship analysis, and customer behavior forecasting. The present research seeks to identify customers who are turning away and to anticipate customer churn in order to prevent the loss of customers. To this end, the variables associated with churn analysis are first identified, and then the bank customers are clustered using a neural network and classified into three categories of loyal, regular, and churning clients. With these labels, a support vector machine is used for classification and churn prediction. Based on the results, the proposed method is able to predict churn with an accuracy of up to 80% and, moreover, performs better than the classical decision tree. Manuscript profile
      • Open Access Article

        38 - Explaining the categories of support vector machine and neural network for Ranking of bank branches
        davod khosroanjom mohamamd elyasi behzad keshanchi Bahare Boobanian shovana abdollahi
        The banking industry holds a great deal of information whose identification is of particular importance. The use of data mining techniques not only improves quality but also leads to competitive advantages and better market positioning. By using data mining to analyze patterns and trends, banks can predict how accurately bank branches can be ranked. In this paper, the branches of one of the large commercial banks (1825 selected branches described by 57 features) were analyzed on real data using a support vector machine classifier and a multilayer perceptron neural network. The evaluation results for the support vector machine showed that this classifier has lower efficiency for the proposed method. However, the neural network combined with PCA achieved high values for the performance criteria; efficiency and accuracy values obtained with the neural network were very high. Manuscript profile
      • Open Access Article

        39 - Assessment of adaptive neuro-fuzzy inference systems and support vector regression in runoff estimation (A case study: Dez Basin)
        Ghazaleh Ahmadian Ahmadabad Mahmoud Zakeri Niri Saber Moazami Goudarzi
        Estimating discharge in a basin can play an important economic role because of its impact on water resource management. In this research, several computational intelligence techniques, namely ANN, SVR and ANFIS, have been used to predict the runoff of the Dez basin. The correlation between stations was investigated and the Kamandan, Zoorabad and Daretakht stations were eliminated due to their low correlation with the surrounding stations. Then, owing to the lack of human intervention, the trends of the stations were evaluated using XLSTAT software and stations without a trend were selected. The correlation coefficient, RMSE and NSE were used to evaluate the performance of the models. The results of this research showed that ANFIS with the clustering approach gives better estimates than with the grid partitioning approach, and that ANN, ANFIS and SVR all have a good ability to simulate the flow of the Dez basin. Manuscript profile
      • Open Access Article

        40 - Modelling and Predicting Earnings Quality Using Decision Tree and Support Vector Machine
        Loghman Hatami Shirkouhi Soghra Barari Nokashti Maryam Ooshaksarae
        Earnings and earnings quality are among the most important decision-making components for users of financial statements. Therefore, predicting earnings quality is very important for investors and other stakeholders. To this end, decision trees and the support vector machine (SVM) were used to predict earnings quality. The statistical population of the study included companies listed on the Tehran Stock Exchange from 2011 to 2021, a period of 10 years. After screening, 113 companies and 1130 observations were selected as the statistical sample. In order to identify and predict earnings quality, indicators related to corporate governance (board independence, audit committee independence, organizational ownership), dividend policy, debt financing, and conservatism were considered as independent variables, and discretionary accruals quality, representing the earnings quality index, was considered as the dependent variable. Data analysis was carried out according to the CRISP-DM data mining standard by implementing four decision tree algorithms, namely CHAID, C5.0, C&R and QUEST, as well as SVM. The results showed that board independence had the greatest effect on earnings quality. The accuracy of the constructed SVM, which is 98.5%, indicates the high capability of this method in predicting earnings quality. Manuscript profile
      • Open Access Article

        41 - Assessing Credit Risk in the Banking System Using Data Mining Techniques
        Nima Hamta Mohammad Ehsanifar Bahareh Mohammadi
        Credit risk is the risk of default on a debt that may arise from a borrower failing to make required payments. The objective of this paper is to identify the factors that affect credit risk and to present a model for predicting the credit risk and credit ranking of legal customers applying for facilities of Sepah Bank in Dezfool city; clustering, neural network and support vector machine methods have been used in the current study. Accordingly, the necessary investigations were carried out on the financial and non-financial data of a simple random sample of 200 legal customers who had applied for bank facilities. In this paper, 27 descriptive variables, including financial and non-financial variables, were investigated, and finally 8 variables effective on credit risk were selected from the available variables by means of bank experts' judgment. Using a clustering method, the data were separated into groups (clusters) in such a way that the data within one cluster were more similar to each other than to points in other clusters. The selected variables were then entered into a three-layer perceptron neural network as the input vector, and finally a support vector machine model was presented in order to predict the financial performance of the bank's legal customers. The results obtained from the neural network and support vector machine models indicate that the neural network model is more efficient in predicting the credit risk and credit ranking of legal customers. Manuscript profile
      • Open Access Article

        42 - Presenting a Hybrid Model based on the Machine Learning for the Classification of Banking and Insurance Industry Common Customers
        Hamidreza Amirhassankhani Abbas Toloie Eshlaghy reza radfar Alireza pourebrahimi
        Global competition, dynamic markets, and rapidly shrinking innovation and technology cycles have all imposed significant challenges on the financial, banking, and insurance industries, and the need for data analysis to improve decision-making processes in these organizations has become increasingly important. In this regard, the data stored in the databases of these organizations are considered valuable sources of the information and knowledge needed for organizational decisions. In the present research, the researchers focus on the common customers of the banking and insurance industries. The purpose is to provide a methodology for predicting the performance of new customers based on the behavior of previous customers. To this end, a hybrid model based on a support vector machine and a genetic algorithm is used: the support vector machine models the relationship between customer performance and customer identity information, and the genetic algorithm tunes and optimizes the parameters of the support vector machine. Customer classification using the proposed model led to a high classification accuracy of 99%. Manuscript profile
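        The hybrid idea described above, a genetic algorithm tuning SVM parameters, can be sketched in a hedged way as follows. The population size, mutation scale and synthetic data are illustrative assumptions rather than the paper's configuration.
```python
# Hedged sketch: a small genetic algorithm tuning C and gamma of an RBF-kernel SVM
# by cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=400, n_features=10, random_state=5)

def fitness(log_c, log_gamma):
    clf = SVC(kernel="rbf", C=10 ** log_c, gamma=10 ** log_gamma)
    return cross_val_score(clf, X, y, cv=5).mean()

# Individuals are (log10 C, log10 gamma) pairs.
pop = rng.uniform(low=[-2, -4], high=[3, 1], size=(12, 2))
for generation in range(10):
    scores = np.array([fitness(c, g) for c, g in pop])
    parents = pop[np.argsort(scores)[-6:]]                  # keep the best half
    children = parents[rng.integers(0, 6, size=6)] + rng.normal(scale=0.3, size=(6, 2))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c, g) for c, g in pop])]
print(f"best C = {10 ** best[0]:.3g}, best gamma = {10 ** best[1]:.3g}")
```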
      • Open Access Article

        43 - Evaluation of soil loss rate in land uses of Nirchai watershed using RUSLE model and Landsat satellite images (OLI)
        mousa Abedini AmirHesam Pasban Behrouz Nezafat takle
        The purpose of this research is to evaluate the amount of soil loss in the land uses of the Nirchai watershed in Ardabil province using the RUSLE model. To carry out this research, the June satellite image of the study area for the year 1400 (Iranian calendar) was first obtained from the United States Geological Survey (USGS), and after atmospheric and radiometric corrections, a land use map was prepared by supervised classification with the support vector machine method. The RUSLE model was then used to estimate the erosion rate. SPSS 21, Excel, ArcGIS 5.4, ArcHydro and ENVI 5.3 software were used to analyze the data and produce the maps. The RUSLE model parameter layers include rainfall erosivity, soil, topography, vegetation cover and the soil protection factor; various data were also used, including statistics from rain gauge and hydrometric stations, 1:50,000 topographic maps, 1:100,000 geological maps, a DEM (20 m), GIS and remote sensing. The results of this study showed that the average annual soil erosion for the whole basin ranges from 0.5 to 14.25 tons per hectare per year. Furthermore, examination of the regression relationships between the RUSLE factors and annual soil erosion showed that the topography factor (LS), with the highest coefficient of determination (R^2 = 0.93), is the most important factor in estimating annual soil erosion with the RUSLE model. Manuscript profile
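        The RUSLE bookkeeping that this abstract relies on is the cell-wise product A = R x K x LS x C x P. The hedged sketch below shows that computation; the small arrays stand in for the raster layers the study derived from its input data.
```python
# Hedged sketch: RUSLE annual soil loss as the cell-wise product of factor rasters.
import numpy as np

R  = np.array([[120.0, 130.0], [125.0, 140.0]])   # rainfall erosivity
K  = np.array([[0.030, 0.028], [0.035, 0.032]])   # soil erodibility
LS = np.array([[1.2, 2.5], [0.8, 3.1]])           # slope length and steepness factor
C  = np.array([[0.10, 0.25], [0.05, 0.30]])       # cover management factor
P  = np.array([[1.0, 1.0], [0.8, 1.0]])           # support practice factor

A = R * K * LS * C * P                            # soil loss (t/ha/yr) per cell
print("soil loss per cell:\n", A)
print("basin mean:", A.mean())
```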
      • Open Access Article

        44 - Investigating the Effect of Land Use Change on Soil Erosion and Sediment Yield in Razeychay Watershed During Past 20 Years
        Mousa Abedini Farydeh Bahramnia Gojabeiglo Raoof Mostafazadeh AmirHesam Pasban
        Soil erosion is a global problem that threatens water and soil resources, and land use change is one of the important factors intensifying soil erosion. The aim of this study was to evaluate the effect of land use change on soil erosion in the Razeychay watershed of Meshginshahr, located in Ardabil province. First, Landsat images of the study area for May 1999 and May 2019 were obtained from the USGS website. In the image processing stage, atmospheric and radiometric corrections were conducted, and then the land use maps of the study area were prepared for the study years using the support vector machine (SVM) supervised classification method. Then, the RUSLE model was used to estimate the amount of erosion in the two periods. SPSS, Excel, ArcGIS 5.4, ArcHydro and ENVI 5.3 software were used for spatial analysis and data processing. The results showed that rangeland, irrigated farming and bare land decreased during the last twenty years, while the extent of dry farming and residential areas increased; the largest change is related to dry farming (an increase of 27.69 hectares). According to the results of the erosion modeling, the rate of erosion decreased from 6.49 to 6.46 tons per hectare per year between 1999 and 2019. Manuscript profile
      • Open Access Article

        45 - Prediction of daily rainfall occurrence using meteorological data of previous days (Case study: Isfahan city)
        Ghorban Mahtabi Farshid Taran Saeed Mozaffari
        The aim of this research is to predict the occurrence of daily rainfall in Isfahan city using the meteorological data of 1 to 7 days before. For this purpose, the meteorological data of the period 2000-2009 were examined using four intelligent models: support vector machine, k-nearest neighbor, artificial neural network and decision tree. The results showed that, in all four methods, the prediction accuracy of the best scenarios using the data of 6 and 7 days before was less than 75%, but using the data of 1 to 5 days before, daily rainfall occurrence was predicted with an accuracy of more than 80%. The performance of the decision tree method was better than the other three methods and, because it provides an interpretable decision tree, the results of its 1- to 5-days-before scenarios were presented. The scenarios using the data of 1 to 3 days before showed that relative humidity is the most suitable parameter for predicting the occurrence of daily rainfall, whereas when using the data of 4 and 5 days before, air temperature was the most suitable parameter. Finally, the performance of the best scenarios was validated using the data of the 2010-2016 period. The best validation results were obtained for the 1-day-before scenario (with the minimum relative humidity parameter) and the 4-days-before scenario (with the maximum temperature parameter), respectively. Manuscript profile
      • Open Access Article

        46 - Evaluation and assessment of changes in the area of Harra (mangrove) forests using remote sensing techniques (Case study: Bandar Abbas)
        Mohammad Ali Zanganeh Asadi Ebrahim Taghavi Moghadam Elahe Akbari
        Knowledge of change is the first and most important action for planners and authorities concerned with the natural and human environment. Satellite images and satellite image processing techniques are a very precise tool for monitoring and assessing changes in forest areas. The purpose of this study is to assess the changes in mangrove forest areas in Bandar Abbas using remote sensing techniques. To achieve this purpose, we used topographic maps and ancillary information, satellite images, and the maximum likelihood and minimum distance algorithms for the years 1989, 2005 and 2015. The results show that the maximum likelihood method, with 98.32% overall accuracy and a kappa coefficient of 0.978, is more accurate than the support vector machine and the minimum distance methods for mapping land cover changes and monitoring forest change. According to the calculations, the forest area increased from 76.09 sq km in 1989 to 125.08 sq km in 2015, which reflects hydrodynamic change along the shores of the Strait of Hormuz. Thus, adopting environmental protection measures in the area is necessary, and any facilities and infrastructure projects must comply with environmental and ecological considerations. Manuscript profile
      • Open Access Article

        47 - Comparison of different classification algorithms in Landsat OLI imagery to produce land use maps (Case study: Beheshte Gomshode region)
        mohammad kazemi ahmad nohegar Mirdad Mirdadi
        Accurate classification of satellite images is one of the basics of obtaining the necessary information from remote sensing technology. Land use mapping is one of the key factors in studies of environmental and natural resources management, and is often one of the most expensive parts of natural resources and environmental projects. Satellite data are one of the fastest and most cost-effective sources available to researchers for mapping land use. In recent years, researchers have produced land use maps from these data using different classification algorithms. This study investigated the ability of 8 common algorithms for land use mapping of the Beheshte Gomshodeh region in Fars province using 2015 data from the Landsat OLI sensor. The results showed that the maximum likelihood (ML) and SVM classifications, with overall accuracies of 98.98 and 98.73% and kappa coefficients of 98.41 and 98.09%, respectively, performed better than the other methods. The accuracy of the 8 methods, in order of priority, was: maximum likelihood, support vector machine, Mahalanobis distance, spectral information divergence, spectral angle mapper, minimum distance from the mean, binary code and parallelepiped. The maximum likelihood classification, with 98.83, had the highest confidence at the 1 percent level. Using a correct classification, land use maps can be extracted with higher accuracy. Manuscript profile
      • Open Access Article

        48 - Identifying Factors Affecting Non-current Debts of Banks Using Neural Networks and Support Vector Machine Algorithm
        sajjad kordmanjiri iman dadashi zahra Khoshnood hamid reza gholamnia roshan
        The main purpose of this paper is to identify the factors influencing the creation and growth of non-current debts in order to make better decisions in granting facilities. To this end, the neighborhood component analysis and Lasso algorithms were used to select the effective variables, and neural networks and the support vector machine were used to classify the samples. In this study, a sample of 660 legal customers of Sepah Bank for the years 2006-2017 was selected, focusing on the characteristic variables extracted from these customers' facility contracts along with financial, non-financial, auditing and economic variables. The results showed that the Lasso algorithm, which concentrated on financial, economic and auditing variables, performed better than the neighborhood component analysis algorithm, and based on this algorithm, 10 key variables affecting non-current debts were identified. Due to the better performance of support vector machines with radial basis function kernels, their use in modeling non-current debts is recommended. Manuscript profile
      • Open Access Article

        49 - An Intelligent Method for Death Prediction Using Patient Age and Bleeding Volume on CT scan
        Yosra Azizi Nasrabadi Ali Jamali Nazari Hamid Ghadiri Farshid Babapour Mofrad
        The purpose of this paper is to predict survival or death within 30 days of a cerebral hemorrhage. Timely and correct diagnosis and treatment of cerebral hemorrhage are essential: if a patient's death is predicted within these thirty days, the treating physician should apply intensive care and more intensive treatment. Cerebral hemorrhages require immediate treatment and rapid, accurate diagnosis. In this article, using the volume of cerebral hemorrhage measured on CT scan and the patient's age as inputs to a support vector machine (SVM), it is predicted which patients with cerebral hemorrhage survive and which die. The cerebral hemorrhage volume and patient age are considered as the model inputs, and the output is the survival or death of patients with cerebral hemorrhage over the following thirty days. The data used included the bleeding volume and age of 66 patients with lobar hemorrhage, 76 patients with deep hemorrhage, nine patients with pontine hemorrhage and 11 patients with cerebellar hemorrhage. All hemorrhage types are considered as input to the SVM. The overall accuracy of the designed SVM is 93%; regardless of the type of cerebral hemorrhage, the survival or death of people with cerebral hemorrhage within 30 days is predicted. Manuscript profile
      • Open Access Article

        50 - Identification of Attention Deficit Hyperactivity Disorder Patients Using Wavelet-Based Features of EEG Signals
        Sahar Karimi Shahraki Mahdi Khezri
        Attention Deficit Hyperactivity Disorder (ADHD) is a neurological and psychiatric disorder which causes attention deficit, anxiety, hyperactivity and impulsive behaviors. ADHD is more common in children and directly leads to learning disability. The aim of this study was to accurately identify ADHD patients by using wavelet-based features of brain (EEG) signals. Recorded EEG signals from 61 children with ADHD (diagnosed according to the DSM-IV criteria) and 60 healthy controls in the age range of 7-12 years were used to design the system. In the proposed method, the EEG signals were decomposed into subbands by applying the wavelet transform, and temporal and statistical features were calculated for the time-domain version of the signal in each subband. The feature set, reduced by principal component analysis (PCA), was then used to train the classification unit to separate ADHD patients from healthy individuals. To obtain the desired results, different types of wavelet functions and decomposition levels were investigated. The bior3.1 wavelet function with the support vector machine (SVM) classifier and the rbio1.1 wavelet function with the k-nearest neighbor (kNN) classifier presented the best performance, with recognition accuracies of 98.33% and 99.17%, respectively. The SVM with a radial basis kernel function (RBF) and the kNN method with the number of nearest neighbors k = 3 obtained the best results. Compared to the results reported in previous studies, the results obtained in this study showed at least a 2% improvement in the recognition accuracy of ADHD patients. Manuscript profile
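        The wavelet feature-extraction stage described above can be illustrated with a hedged sketch: a discrete wavelet decomposition of an EEG channel with the bior3.1 wavelet, followed by simple statistical features per subband. The decomposition level, feature list and placeholder signal are illustrative assumptions.
```python
# Hedged sketch: DWT subband decomposition plus per-subband statistical features.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(6)
eeg_channel = rng.normal(size=2048)                    # placeholder EEG samples

coeffs = pywt.wavedec(eeg_channel, "bior3.1", level=4) # [cA4, cD4, cD3, cD2, cD1]
features = []
for band in coeffs:
    features.extend([band.mean(), band.std(), skew(band), kurtosis(band)])
features = np.asarray(features)
print("feature vector length:", features.shape[0])     # fed to SVM / kNN after PCA
```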
      • Open Access Article

        51 - Proposing an Automated System for Differentiating between Healthy Individuals and Patients with Diabetic Retinopathy
        Mina Ghayoor Hossein Pourghassem
        Diabetes is one of the most common diseases in the world and adversely affects different body organs; it is also one of the most common causes of eye problems. Analyzing retinal damage is one of the best ways to detect the effects of diabetes, so examining the damage to the retina is a useful diagnostic route. Hence, first, a highly applicable and effective method, combining the Wiener filter and the discrete wavelet transform (DWT), is used to remove noise from the images. Afterward, the k-means clustering algorithm is used to remove unusable image regions, including the very light and very dark areas of the image. Next, the image color and shape features are extracted: the images are transferred to the Lab color space, which better matches human vision, to extract the color features, and they are converted into grayscale images to extract the shape features. After feature extraction, the number of features is reduced using the principal component analysis (PCA) algorithm, and the best and most effective features are selected. Finally, the support vector machine classifier with different kernels is used to classify the features and images into two categories, namely healthy participants and patients. The accuracy of this algorithm on the test images is over 90%. Manuscript profile
      • Open Access Article

        52 - Long-Term Demand Forecasting in Electrical Energy Supply Chain of Espidan Ironstone Industry using Deep Learning and Extreme Learning Machine
        Sepehr Moalem Roya M. Ahari Ghazanfar Shahgholian Majid Moazzami Seyed Mohammad Kazemi
        Espidan ironstone industries is one of the most power-consuming industries in the electricity supply chain of Isfahan province, the country's second industrial hub, and one of the main suppliers of raw materials in the supply chain of the country's steel industry. Planning in a large-scale electricity supply chain, in a space full of uncertainty, begins with electricity demand forecasting. In this paper, a hybrid long-term demand forecasting method for the electricity supply chain of Espidan ironstone industries is proposed, using a combined data mining approach including wavelet transform, deep learning and the extreme learning machine. The data used in this study are the recorded electrical energy demand signal of Espidan ironstone industries over a period of 40 months in 24-hour form. In part of the study period, the data are interrupted because this industry had no production in some hours, so that only 40% of the data had a value and the remaining 60% were zero. This led to information deficiencies and increased the forecasting error up to 40% in the first step of the proposed algorithm. By completing the first step of the proposed model with the extreme learning machine (ELM), the forecasting error was reduced and it was possible to create an improved forecasting model through supervised training. Finally, the simulation results are compared with other available approaches such as the support vector machine and the decision tree. The results show a reduction in error and a significant increase in the accuracy of the proposed method for long-term demand forecasting in the electricity supply chain of Espidan ironstone industries. Manuscript profile
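        The extreme learning machine named above is a single hidden layer with random, fixed weights whose output weights are solved by least squares. The hedged sketch below shows that core idea on a toy demand series; the hidden size, lag window and synthetic data are illustrative assumptions, not the paper's configuration.
```python
# Hedged sketch: a basic extreme learning machine (ELM) regressor.
import numpy as np

rng = np.random.default_rng(7)

def elm_fit(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))       # random input weights (fixed)
    b = rng.normal(size=n_hidden)                     # random biases (fixed)
    H = np.tanh(X @ W + b)                            # hidden layer activations
    beta = np.linalg.pinv(H) @ y                      # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy example: predict the next demand value from the previous 24 hourly values.
demand = 100 + 20 * np.sin(np.arange(2000) / 24) + rng.normal(scale=3, size=2000)
X = np.array([demand[i:i + 24] for i in range(len(demand) - 24)])
y = demand[24:]
W, b, beta = elm_fit(X[:1500], y[:1500])
pred = elm_predict(X[1500:], W, b, beta)
print("test RMSE:", np.sqrt(np.mean((pred - y[1500:]) ** 2)))
```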
      • Open Access Article

        53 - Improved Intrusion Detection System Based On Distributed Self-Adaptive Genetic Algorithm to Solve Support Vector Machine in Form of Multi Kernel Learning with Auto Encoder
        Elaheh Faghihnia Seyed Reza Kamel Tabakh Farizni Maryam Kheirabadi
        Intrusion into systems through network infrastructure and the Internet is one of the security challenges facing the world of information and communication technology and can lead to the destruction of systems and unauthorized access to data and information. In this paper, a support vector machine model with weighted kernels and tuned kernel parameters is presented to detect intrusions. Due to the high complexity of this problem, conventional optimization methods are not able to solve it; therefore, we propose a Distributed Self-Adaptive Genetic Algorithm (DSAGA). On the other hand, due to the high volume of data in such problems, an autoencoder has been used for data reduction. The proposed approach is a hybrid method based on an autoencoder, an improved support vector machine and the Distributed Self-Adaptive Genetic Algorithm (DSAGA), and it is evaluated on the DARPA data set. Manuscript profile
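        The multiple-kernel idea named in the title can be illustrated with a hedged sketch: an SVM trained on a weighted sum of RBF and polynomial kernels via scikit-learn's precomputed-kernel interface. The fixed kernel weights here are the kind of parameters a genetic algorithm could tune, and the data are a synthetic placeholder for intrusion records.
```python
# Hedged sketch: SVM with a weighted combination of two kernels (multi-kernel learning).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=8)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=8)

w_rbf, w_poly = 0.7, 0.3                                   # example kernel weights
K_tr = w_rbf * rbf_kernel(X_tr, X_tr) + w_poly * polynomial_kernel(X_tr, X_tr, degree=2)
K_te = w_rbf * rbf_kernel(X_te, X_tr) + w_poly * polynomial_kernel(X_te, X_tr, degree=2)

clf = SVC(kernel="precomputed", C=1.0).fit(K_tr, y_tr)
print("test accuracy:", clf.score(K_te, y_te))
```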
      • Open Access Article

        54 - Evaluation of Deep Neural Networks in Emotion Recognition Using Electroencephalography Signal Patterns
        Azin Kermanshahian Mahdi Khezri
        In this study, the design of a reliable detection system that is able to identify different emotions with the desired accuracy has been considered. To reach this goal, two different structures for the emotion recognition system are considered: 1) using linear and non-linear features of the electroencephalography (EEG) signal with common classifiers, and 2) using the EEG signal in a deep learning structure to identify emotional states. To design the system, the EEG signals of the DEAP database, recorded from 32 subjects while they watched emotional videos, were used. After preparation and noise removal, linear and non-linear features such as skewness, kurtosis, Hjorth parameters, Lyapunov exponent, Shannon entropy, correlation and fractal dimension, and time reversibility were extracted from the alpha, beta and gamma subbands of the EEG signals. In structure 1, the features were applied as input to common classifiers such as the decision tree (DT), k-nearest neighbor (kNN) and support vector machine (SVM). In structure 2, the EEG signal itself was used as the input of a convolutional neural network (CNN). The goal is to compare the results of deep learning networks with those of the other methods for emotion recognition. According to the obtained results, the SVM achieved the best performance, identifying four emotional states with 94.1% accuracy, while the proposed CNN identified the desired emotional states with an accuracy of 86%. Deep learning methods have the advantage over simple classifiers that they do not require hand-crafted signal features and are resistant to different noises. Using shorter signal segments and performing near-optimal preprocessing and conditioning can further improve the results of deep neural networks. Manuscript profile
      • Open Access Article

        55 - Brain Stroke Classification Based on Deep Learning Approach in Microwave Brain Imaging System
        Majid Roohi Jalil Mazloum Mohammad Ali Pourmina Behbod Ghalamkari
        Brain stroke is one of the main causes of death in the world and mostly affects seniors. Almost 85% of all brain strokes are ischemic; the rest are hemorrhagic, caused by internal bleeding in a part of the brain. Because of the high mortality rate, quick diagnosis and treatment of ischemic and hemorrhagic strokes are of utmost importance. In this paper, to realize a microwave brain imaging system, a circular array of modified bowtie antennas placed around a multilayer head phantom containing a spherical target of 1 cm radius, representing an intracranial hemorrhage, is simulated in the CST simulator. To obtain satisfactory radiation characteristics in the desired band (0.5-5 GHz), an appropriate matching medium is designed. In the processing stage, a confocal image reconstruction method based on the delay-and-sum (DAS) and delay-multiply-and-sum (DMAS) beamforming algorithms is first applied; the reconstructed images show the usefulness of the proposed confocal method in detecting the 1 cm spherical target. The main purpose of the paper, however, is stroke classification using deep learning. To this end, an image classification algorithm is developed to estimate the stroke type from the reconstructed images: they are classified into different categories of cerebrovascular disease by a multiclass linear support vector machine (SVM) trained on convolutional neural network (CNN) features extracted from the images. The simulation results show that the proposed image reconstruction method precisely localizes bleeding targets, with 89% accuracy in 9 seconds, and the proposed deep learning approach performs well in classification, since the system does not confuse the different classes.
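        The following is a minimal delay-and-sum (DAS) sketch in the spirit of the reconstruction step, not the authors' code: each antenna's backscattered trace is shifted by the round-trip delay to a candidate pixel and summed, so bright pixels indicate strong scatterers such as a bleeding target. Antenna positions, wave speed, and the synthetic echo are illustrative assumptions.

```python
import numpy as np

def das_image(signals, antenna_xy, grid_x, grid_y, fs, v):
    """signals: (n_antennas, n_samples) time traces; returns a 2-D intensity map."""
    image = np.zeros((len(grid_y), len(grid_x)))
    n_samples = signals.shape[1]
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            total = 0.0
            for ant, trace in zip(antenna_xy, signals):
                delay = 2.0 * np.hypot(x - ant[0], y - ant[1]) / v   # round-trip time
                idx = int(round(delay * fs))
                if idx < n_samples:
                    total += trace[idx]
            image[iy, ix] = total ** 2                               # coherent sum, squared
    return image

# toy usage: 8 antennas on a circle, one ideal point-scatterer echo
fs, v = 10e9, 2e8                                   # 10 GS/s sampling, ~c/1.5 in tissue-like medium
ants = [(0.1 * np.cos(a), 0.1 * np.sin(a)) for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
target = (0.02, 0.01)
sig = np.zeros((8, 4000))
for i, (ax, ay) in enumerate(ants):
    sig[i, int(round(2 * np.hypot(target[0] - ax, target[1] - ay) / v * fs))] = 1.0
img = das_image(sig, ants, np.linspace(-0.08, 0.08, 40), np.linspace(-0.08, 0.08, 40), fs, v)
print("brightest pixel:", np.unravel_index(img.argmax(), img.shape))
```

        The DMAS variant multiplies pairs of delay-aligned signals before summing, which sharpens the image at extra computational cost.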
      • Open Access Article

        56 - Evaluation of Surface Electromyogram Signal Decomposition Methods in the Design of Hand Movement Recognition System
        Maryam Karami Mahdi Khezri
        One method for determining motor commands to control hand prostheses is to use surface electromyogram (sEMG) signal patterns. Because of the random and non-stationary nature of the signal, the idea of using signal information in small time intervals was investigated. In this study, with the aim of more accurate and faster detection of hand movements, two signal decomposition methods, the discrete wavelet transform (DWT) and empirical mode decomposition (EMD), were evaluated. The sEMG signals of the Ninapro-DB1 dataset, recorded from 27 healthy subjects while performing hand and finger movements, were used to design the system. Simple time-domain features with fast computation were extracted from each subband of the decomposed signals, and a support vector machine (SVM) with different kernel functions was applied as the classifier. The results show that the DWT and EMD methods, by giving access to information in time and frequency sub-intervals of the signals, identify hand movements better than previous studies. With the EMD method and eight intrinsic mode functions (IMFs), the highest recognition accuracy of 83.3% was obtained for six movements. The DWT with the Bior5.5 mother wavelet and five levels of decomposition achieved 80% recognition accuracy for ten movements, and with the Coif2 mother wavelet and six levels of decomposition, 83.33% for eight movements. Overall, the DWT performed better than EMD for the design of a hand movement recognition system based on sEMG signal patterns.
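        A minimal sketch of the DWT branch, assuming random windows in place of the Ninapro-DB1 signals (not the authors' code): each sEMG window is decomposed with a Bior5.5 wavelet, simple time-domain features are taken from every subband, and an SVM classifies the movement.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def subband_features(x, wavelet="bior5.5", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)          # [cA5, cD5, ..., cD1]
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)),                        # mean absolute value
                  np.sqrt(np.mean(c ** 2)),                  # RMS
                  np.sum(np.abs(np.diff(c)))]                # waveform length
    return np.array(feats)

rng = np.random.default_rng(1)
X = np.array([subband_features(rng.standard_normal(1024)) for _ in range(120)])
y = rng.integers(0, 6, size=120)                             # six toy movement labels
clf = SVC(kernel="rbf").fit(X[:90], y[:90])
print("toy accuracy:", clf.score(X[90:], y[90:]))
```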
      • Open Access Article

        57 - Content-Based Medical Image Retrieval Based on Image Feature Projection in Relevance Feedback Level
        Mohammad Behnam Hossein Pourghasem
        The purpose of this study is to design a content-based medical image retrieval system and to provide a new method for reducing the semantic gap between visual features and semantic concepts. The performance of retrieval systems based only on visual content generally degrades because such features often fail to describe the high-level semantic concepts in the user's mind. In this paper, the problem is addressed with a new approach that, at the relevance feedback level, projects relevant and irrelevant images into a new space of lower dimensionality and less overlap. To this end, the feature space is first transformed using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), and the feedback images are then classified with a Support Vector Machine (SVM). The proposed framework has been evaluated on a database of 10,000 medical X-ray images from 57 semantic classes, and the results show that it significantly improves the accuracy of the retrieval system.
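        A minimal sketch of the relevance-feedback step, with random feature vectors standing in for the X-ray image descriptors (not the authors' code): feedback images marked relevant/irrelevant are projected with PCA followed by LDA into a low-dimensional, less overlapping space, an SVM is trained there, and the database is re-ranked by the SVM decision value.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
feedback_X = rng.standard_normal((40, 128))      # visual features of feedback images
feedback_y = rng.integers(0, 2, size=40)         # 1 = relevant, 0 = irrelevant
database_X = rng.standard_normal((500, 128))     # features of the whole image database

projector = make_pipeline(PCA(n_components=20),
                          LinearDiscriminantAnalysis(n_components=1))
Z = projector.fit_transform(feedback_X, feedback_y)

svm = SVC(kernel="rbf").fit(Z, feedback_y)
scores = svm.decision_function(projector.transform(database_X))
ranking = np.argsort(-scores)                    # most "relevant-looking" images first
print(ranking[:10])
```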
      • Open Access Article

        58 - Static Voltage Stability Analysis by Using SVM and Neural Network
        Mehdi Hajian Asghar Akbari Foroud Hossein Norouzian
        Voltage stability is an important problem in power system networks. In this paper, the application of neural networks (NN) and the support vector machine (SVM) to static voltage stability, for estimating the voltage stability margin (VSM) and predicting voltage collapse, is investigated. Voltage stability is treated in two parts: in the first, the static voltage stability margin is calculated by a radial basis function neural network (RBFNN), whose advantage is high accuracy in detecting the VSM online; in the second, voltage collapse analysis of the power system is performed by a probabilistic neural network (PNN) and the SVM. The results indicate that the SVM requires less training time and fewer training samples than the NN. A new scheme for generating training samples for the detection system, based on a normal-distribution load curve at each load feeder, is used, and voltage stability is evaluated with the well-known L and VSM indexes. To demonstrate the validity of the proposed methods, the IEEE 14-bus grid and the actual network of Yazd Province are used.
      • Open Access Article

        59 - Wavelet Packet Entropy in Speaker-Independent Emotional State Detection from Speech Signal
        Mina Kadkhodaei Elyaderani Hamid Mahmoodian Ghazaal Sheikhi
        In this paper, wavelet packet entropy is proposed for speaker-independent emotion detection from speech. After pre-processing, a wavelet packet decomposition with the db3 wavelet at level 4 is computed, and the Shannon entropy of its nodes is used as a feature. In addition, prosodic features such as the first four formants, jitter (pitch deviation amplitude), and shimmer (energy variation amplitude), together with MFCC features, complete the feature vector. A Support Vector Machine (SVM) then classifies the vectors in a multi-class (all emotions) or two-class (each emotion versus the normal state) setting. Forty-six different utterances of a single sentence from the Berlin Emotional Speech Dataset, spoken by 10 speakers in sadness, happiness, fear, boredom, anger, and the normal emotional state, are selected. Experimental results show that the proposed features improve emotional state detection accuracy in the multi-class setting; furthermore, adding the wavelet entropy coefficients to the other features increases the accuracy of two-class detection for anger, fear, and happiness.
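        A minimal sketch of the wavelet packet entropy feature (not the authors' code): the speech frame is decomposed with a db3 wavelet packet to level 4 and the Shannon entropy of each terminal node's coefficients becomes one feature (16 values per frame). The random signal is only a stand-in for a pre-processed speech frame.

```python
import numpy as np
import pywt

def wavelet_packet_entropy(x, wavelet="db3", level=4):
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        c = node.data
        p = c ** 2 / np.sum(c ** 2)                  # normalized coefficient energies
        p = p[p > 0]
        feats.append(-np.sum(p * np.log2(p)))        # Shannon entropy of the node
    return np.array(feats)

frame = np.random.default_rng(3).standard_normal(2048)
print(wavelet_packet_entropy(frame).shape)           # (16,) one entropy per level-4 node
```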
      • Open Access Article

        60 - Crack Detection in Structures Using Modal Strain Energy and Frequency
        Siyamak Ghadimi Seyyed Sina Kourehli
        In this paper, a new method for crack detection in structures, based on the first three mode frequencies and modal strain energies and using a least squares support vector machine, is proposed. Since the mode shape vectors correspond to the nodal displacements of a vibrating structure, strain energy is stored in each element of the structure. The strain energy associated with a mode shape vector is usually referred to as modal strain energy (MSE) and is a valuable parameter for crack identification. Changes in natural frequencies are likewise an effective, inexpensive, and fast tool for non-destructive testing. The proposed method therefore uses the first three natural frequencies and the modal strain energies as input parameters and the crack states as output to train the least squares support vector machine model.
      • Open Access Article

        61 - Development of Wavelet Hybrid Models for Estimating Regional Droughts in Siminehroud Basin
        Erfan Rostam Zade Alireza Parvishi
        In the present study, drought in the Siminehroud basin was investigated with intelligent models: the Support Vector Machine (SVM), the Artificial Neural Network (ANN), and wavelet theory (W). Data from six rain gauge stations in the region were used, and the drought index was calculated at four time scales, with the first-order autocorrelation lag selected as the optimal delay. The appropriate structure of the ANN was then determined by trial and error, and the three coefficients of the SVM model were tuned and the model fitted. Evaluation of the individual models showed no significant difference between the two methods in predicting droughts. WANN and WSVM hybrid models were then prepared; the results showed that applying wavelet theory greatly improved the performance of the individual models, with the RMSE and MAE indices decreasing by 19% and 21%, respectively, and the correlation coefficient increasing by 30%.
      • Open Access Article

        62 - Modelling of drag reduction of silica nanofluid in single-phase flow of water through horizontal pipelines using support vector regression optimized by genetic algorithm and comparison between the model results and experimental data
        Abdolmohamad Ghaedi Abdolrasul Pouranfard Nabiollah Ramezani Azam Vafaei
      • Open Access Article

        63 - Improvement of Face Recognition Approach through Fuzzy-Based SVM
        Amir Hooshang Mazinan لیلا یار محمدی
      • Open Access Article

        64 - Data Mining of Financial Statements for Granting Financial Facilities
        امیر رضا کیقبادی وحید خدامی
      • Open Access Article

        65 - Predicting Emotional Tendency of Investors Using Support Vector Machine (SVM) and Decision Tree (DT) Techniques
        Reza Taghavi Iman Dadashi Mohammad Javad Zare Bahnamiri Hamidreza Gholamnia Roshan
        Investors' emotional tendencies indicate the margin of shareholders' optimism and pessimism towards a stock. Investors' emotions, under the influence of psychological phenomena, direct people's behavior and in many cases make them deviate from rational behavior. The purpose of this study is to predict the emotional tendencies of investors using intelligent methods. Using 97 financial ratios of 176 companies listed on the Tehran Stock Exchange over the period 2006 to 2018, investors' emotional tendencies have been predicted with support vector machine (SVM) and decision tree (DT) techniques. To measure the emotional tendencies of investors, four indicators were applied: relative strength, the psychological line, trading volume, and the stock turnover adjustment rate; these indicators were then combined with the help of the PCA method. Mean absolute error (MAE) and root mean square error (RMSE) were used to compare the prediction methods. The results indicate that the prediction error of the support vector machine is smaller than that of the decision tree.
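        A minimal sketch of this pipeline, assuming random stand-ins for the ratios and indicators (not the authors' code): the four sentiment indicators are collapsed into one index with PCA, then SVM- and tree-based regressors trained on the financial ratios predict that index and are compared by MAE and RMSE.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(4)
ratios = rng.standard_normal((300, 97))            # 97 financial ratios per firm-year
indicators = rng.standard_normal((300, 4))         # relative strength, psych. line, volume, turnover

sentiment = PCA(n_components=1).fit_transform(indicators).ravel()   # composite sentiment index

train, test = slice(0, 240), slice(240, 300)
for name, model in [("SVM", SVR(kernel="rbf")), ("DT", DecisionTreeRegressor(max_depth=5))]:
    pred = model.fit(ratios[train], sentiment[train]).predict(ratios[test])
    mae = mean_absolute_error(sentiment[test], pred)
    rmse = mean_squared_error(sentiment[test], pred) ** 0.5
    print(f"{name}: MAE={mae:.3f} RMSE={rmse:.3f}")
```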
      • Open Access Article

        66 - Comparing the Predictive Power of the Firefly Algorithm, the Decision Tree Algorithm, and the Support Vector Machine Regression Algorithm for Systematic Risk Prediction
        Alireza Eslampour Roya Darabi
        Financial and economic decisions are always exposed to risk because of future uncertainty, so one way to help investors is to provide patterns for forecasting investment risk: the closer the predictions are to reality, the sounder the decisions made on their basis. The goal of this research is to predict the systematic risk of companies listed on the Tehran Stock Exchange using artificial neural network software and three approaches: the firefly algorithm, the decision tree algorithm, and the support vector machine regression algorithm. A sample of 92 companies listed on the Tehran Stock Exchange over the period 2013 to 2018 was used. The hypothesis tests showed that the predictive power of the firefly algorithm for systematic risk is greater than that of the decision tree algorithm and the support vector machine regression algorithm, and that the decision tree algorithm in turn has higher predictive power for systematic risk than the support vector machine regression algorithm.
      • Open Access Article

        67 - Development of an intelligent method based on fuzzy technical indicators for predicting and trading the euro-dollar exchange rate
        Alireza Sadeghi Amir Daneshvar Mahdi Madanchi Zaj
        Today, the Forex market is the largest financial market in the world. Determining the right strategy for buying or selling in Forex rests on predicting the price trend, so complex meta-heuristic models are used to choose a suitable strategy. In this research, a new method for investing in the Forex market is presented that predicts the market trend and applies trading rules based on fuzzy technical indicators. For forecasting, a hyper support vector machine (HSVM) is used to classify the market into three classes (uptrend, downtrend, sideways), and a dynamic genetic algorithm is used to optimize the trading rules, which are defined by five fuzzy technical indicators. Daily euro-dollar pair data between 2010 and 2019 serve as training and test data. Compared with traditional methods, the results are promising.
      • Open Access Article

        68 - A model for predicting noisy stock price time series using singular spectrum analysis and support vector regression with particle swarm optimization, compared with wavelet transform-neural network, autoregressive moving average, and polynomial regression models
        Shaban Mohammadi Hadi Saeidi Abdolhosein Talebi Najafabadi Ghasem Elahi Shirvan
        In this research, a model for analyzing and predicting the noisy financial time series of stock prices is presented, using singular spectrum analysis and support vector regression together with particle swarm optimization. The minute-by-minute time series of closing prices of 140 company shares from different industries on the Tehran Stock Exchange, for the period from 28 May to 11 June in each of the years 1392 to 1398, were examined separately. The performance of the proposed model was compared with four other models: the wavelet transform with a neural network, the autoregressive moving average process, polynomial regression, and the naïve model. Mean absolute error, mean absolute percentage error, and root mean square error were used as the main performance criteria. The results show that, on all three criteria, the proposed model analyzes and predicts noisy financial time series better than the other models (the wavelet transform, autoregressive moving average, polynomial regression, and naïve models).
      • Open Access Article

        69 - A prediction-based portfolio optimization model using support vector regression
        Mohammad Amin Monadi Amirabbas Najafi
        The purpose of portfolio optimization is to select an optimal combination of financial assets that guides investors toward the highest return for the lowest possible risk. One of the key factors in portfolio optimization decisions is predicting stock prices, for which classical nonlinear mathematical and intelligent models such as regression are commonly used. In the present study, a nonlinear multi-output support vector regression model is applied to reduce prediction errors. To show the effectiveness of the proposed model, data on S&P 500 index companies for the period 12/09/2016 to 02/08/2021 are used. The results show that, in terms of the Sharpe ratio, portfolio selection based on predictions from multi-output support vector regression, which considers the relationships between the outputs simultaneously, performs better than portfolio selection based on predictions from the standard regression method.
      • Open Access Article

        70 - Predicting cash holdings using supervised machine learning algorithms in companies listed on the Tehran Stock Exchange (TSE)
        Saeid Fallahpour Reza Raei Negar Tavakoli
        Using 22 selected features (examined during the research) and machine learning methods, this study predicts the cash holdings of companies listed on the Tehran Stock Exchange; 201 companies were investigated from 1396 to 1400. Multiple linear regression, k-nearest neighbor, support vector regression, decision tree, random forest, extreme gradient boosting, and multilayer neural networks are used for prediction. The results show that multiple linear regression and k-nearest neighbor produce high root mean square error (RMSE) and mean absolute error (MAE), whereas more complex algorithms, especially support vector regression, achieve higher accuracy. The findings also indicate that after reducing the feature set to 15 variables, the machine learning methods, especially k-nearest neighbor, provide better results. Based on a paired-sample t-test, support vector regression performs better than the other supervised machine learning algorithms except the decision tree. The most important variables were company size and capital expenditures (CapEx); the World Uncertainty Index and inflation were also relatively important. Therefore, using the support vector regression algorithm, the amount of cash held can be predicted to a significant extent.
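        A minimal sketch of such a model comparison, with synthetic data standing in for the 22 firm-level features (not the authors' code): several supervised regressors predict a cash-holding target and are compared by cross-validated RMSE and MAE.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(5)
X = rng.standard_normal((400, 22))                       # 22 selected firm features
y = X[:, 0] * 0.5 + np.tanh(X[:, 1]) + 0.1 * rng.standard_normal(400)  # toy cash-holding target

models = {"OLS": LinearRegression(),
          "kNN": KNeighborsRegressor(n_neighbors=7),
          "SVR": SVR(kernel="rbf", C=10.0),
          "RF": RandomForestRegressor(n_estimators=200, random_state=0)}

for name, model in models.items():
    cv = cross_validate(model, X, y, cv=5,
                        scoring=("neg_root_mean_squared_error", "neg_mean_absolute_error"))
    print(f"{name}: RMSE={-cv['test_neg_root_mean_squared_error'].mean():.3f} "
          f"MAE={-cv['test_neg_mean_absolute_error'].mean():.3f}")
```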
      • Open Access Article

        71 - Bankruptcy prediction using hybrid data mining models based on misclassification penalty
        Atiye Torkaman AmirAbbas Najafi
        In recent years, data mining, particularly the support vector machine, has gained considerable interest among investors, managers, and researchers as an effective means of bankruptcy prediction. However, studies indicate that it is highly sensitive to the selection of parameters and input variables. The aim of this research is therefore to improve bankruptcy prediction accuracy by combining an advanced support vector machine model with the k-nearest neighbor approach to eliminate erroneous entries. To achieve this, the training data, consisting of five financial ratios (current ratio, net profit margin, debt ratio, return on assets, and return on investment) from 150 companies listed on the Tehran Stock Exchange over the 10-year period 2010-2019, are first refined with the k-nearest neighbor algorithm. A prediction model is then constructed with a support vector machine based on a misclassification penalty, its parameters are estimated, and its validity is assessed on test data. Finally, the outcomes of the proposed model are compared with traditional models. The findings demonstrate that combining the k-nearest neighbor model with the support vector machine reduces the overall prediction error, and that the penalty coefficients of the support vector machine are highly statistically significant.
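        A minimal sketch of the hybrid idea, not the authors' exact model: training firms whose label disagrees with the prediction of their k nearest neighbors are treated as erroneous entries and removed, then an SVM with class weights (a simple stand-in for the misclassification-penalty formulation) is trained on the cleaned ratios. All data below are synthetic.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.standard_normal((300, 5))                     # five financial ratios per firm
y = (X[:, 0] + 0.5 * X[:, 2] + 0.3 * rng.standard_normal(300) > 0).astype(int)

# 1) kNN editing: drop samples whose label disagrees with their neighborhood prediction.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
keep = knn.predict(X) == y
X_clean, y_clean = X[keep], y[keep]

# 2) Penalized SVM on the refined training set; class_weight raises the cost of
#    misclassifying the rarer (e.g. bankrupt) class.
svm = SVC(kernel="rbf", C=1.0, class_weight={0: 1.0, 1: 3.0}).fit(X_clean, y_clean)
print("kept", keep.sum(), "of", len(y), "training firms")
```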
      • Open Access Article

        72 - A Comparative Study of the Power of Gaussian Mixture and Support Vector Machine Models in Detecting and Predicting Price Bubbles
        حمیدرضا کردلویی فرشاد تیموری
      • Open Access Article

        73 - Portfolio Formation Using Diagonal Quadratic Discriminant Analysis and Weighting Based on Posterior Probability
        Saeid Fallahpour H. Pirayesh Shirazinejad
        Stock return forecasting is one of the most important questions in stock market investing. Because of political and economic effects, among others, modern and intelligent models are needed to forecast returns. The main idea in this research is to classify stocks into high- and low-return groups, and a support vector machine (SVM) was used for this purpose. Sequential feature selection was used to choose the best variables for the models, and to evaluate the accuracy of the SVM the same forecast was produced with diagonal quadratic discriminant analysis (DQDA). A paired t-test shows that the models have no significant difference. Equal-weighted portfolios were created for each model with and without feature selection, and the posterior probability was used to weight the portfolio of DQDA with feature selection. The returns of each portfolio were calculated for the years 1388-1391. The simulation results are satisfactory, and all portfolios' returns outperform the market portfolio.
      • Open Access Article

        74 - Smart Buying and Selling System Design Based on a Model Consisting of a Support Vector Machine Algorithm and Theory of Trend Channel
        Shapoor Mohammadi Seyyed Ali Mousavi Sarhadi Mohammad Nooribakhsh
        Predicting future prices, and consequently earning higher returns in financial markets, has always been one of the most important issues. In this study, the design of an intelligent buy-and-sell system based on a composite model of the support vector machine algorithm and trend channel theory is discussed. The study was performed in several main steps: first, the limits of the trend channel at different time intervals were extracted; next, these limits over the test period were predicted by the support vector machine algorithm; then, a buy-and-sell strategy was defined and implemented within the predicted channel; and finally, the returns of the designed system were compared with the returns obtained from a buy-and-hold strategy. For all of the sample selection criteria, the performance of the intelligent system based on the combined support vector machine algorithm and trend channel theory was better than that of the buy-and-hold strategy.
      • Open Access Article

        75 - A Genetic Algorithm-Based Least Squares Support Vector Machine Approach for Estimating the Credit Rating of Bank Customers
        احمد پویان فر سعید فلاح پور محمدرضا عزیزی
      • Open Access Article

        76 - Comparing Different Feature Selection Methods in Financial Distress Prediction of the Firms Listed in Tehran Stock Exchange
        Mohammad Namazi Mostafa Kazemnezhad M. Mahdi Nematollahi
        Research on financial distress and bankruptcy emphasizes the design of ever more sophisticated classifiers and pays less attention to feature (variable) selection and its appropriate methods. Accordingly, the purpose of this study is to compare the performance of different feature selection methods in predicting the financial distress of companies listed on the Tehran Stock Exchange (TSE). Several feature selection methods were investigated and compared, including the t-test, stepwise regression, factor analysis, relief, wrapper subset selection, and RFE-SVM. To obtain comparable, reliable experimental results, three different classifiers (neural networks, the support vector machine, and AdaBoost) were used. Overall, the experimental results confirmed the usefulness of variable selection methods and showed significant differences in the performance of the different methods. In other words, applying feature selection increases mean accuracy and reduces the occurrence of type I and type II errors. Furthermore, the results indicated that the wrapper subset selection method outperforms the other feature selection methods.
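        A minimal sketch of RFE-SVM feature selection, with synthetic data standing in for the TSE firms' ratios and distress labels (not the authors' code): recursive feature elimination with a linear SVM repeatedly drops the least useful features, and the reduced set is then evaluated with a classifier.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.standard_normal((200, 30))                         # 30 candidate financial ratios
y = (X[:, 3] - X[:, 7] + 0.2 * rng.standard_normal(200) > 0).astype(int)

selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=8).fit(X, y)
X_reduced = selector.transform(X)

full = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
reduced = cross_val_score(SVC(kernel="linear"), X_reduced, y, cv=5).mean()
print(f"accuracy with all 30 features: {full:.3f}, with 8 selected: {reduced:.3f}")
print("selected feature indices:", np.where(selector.support_)[0])
```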
      • Open Access Article

        77 - Ability of Machine Learning Algorithms and Artificial Neural Networks in Predicting Accounting Profit Information Content Before Announcing
        Hossein Alizadeh Majid Zanjirdar Gholam Ali Haji
        Purpose: The aim of this research is to investigate the ability of artificial neural networks and machine learning algorithms, including the Support Vector Machine and Random Forest, to predict the information content of accounting profits before their announcement, for companies listed on the Tehran Stock Exchange during the period 2015 to 2020. Methodology: The daily data required for the research were collected with the Rahnaward-e-Novin software, and a systematic random sampling method was used to select 88 companies. MATLAB was used to model the artificial neural networks and machine learning algorithms, and Python code was employed to calculate abnormal returns. The information content of profits was measured through the relationship between profits and abnormal returns, based on the model of Porti et al. (2018). The input variables of the artificial neural networks and machine learning algorithms are technical indicators, and accuracy, precision, recall, and F-score metrics were used for performance evaluation. Findings: The predictions of the three models showed that the Support Vector Machine and Random Forest were more accurate than the artificial neural networks in predicting buy, sell, and hold strategies, and of the three models only the Support Vector Machine was able to predict the information content of profits. Originality/Value: The main innovation of the research is the design of a model that predicts stock price movements on the next trading day using artificial neural networks, the Support Vector Machine, and Random Forest. The findings can increase the speed with which information is disseminated to and absorbed by the market, which reduces the impact of information asymmetry and information-based trading and ultimately enhances market efficiency.
      • Open Access Article

        78 - Application and Comparison of Simple Additive Weighting method, Fuzzy Analytic Hierarchy Process and Support Vector Machine in identifying the internal and external factors in SWOT’s analysis
        Ali Haerian Ardekani Hamidreza Koosha Fatemeh Mirsaeedi
        All organizations must determine their future path; in other words, they must understand where they stand and where they are heading. Strategic management is one of the most recognized management approaches for this purpose, and one of its most important steps is recognizing an organization's internal and external factors. If these factors are recognized correctly, they can be used to establish correct and optimal strategies, yet few researchers so far have used exact methods for identifying and prioritizing them. In this article, we use multi-criteria techniques (Simple Additive Weighting and the Fuzzy Analytic Hierarchy Process) and data mining (the support vector machine) to recognize internal and external factors. The case study is the Water and Sewerage Company of Mashhad. First, the organization's internal and external factors are identified and classified by its senior managers and experts. To apply Simple Additive Weighting and the Fuzzy Analytic Hierarchy Process, criteria are determined according to the definition of internal and external factors, and the criteria weights are obtained with the Fuzzy Analytic Hierarchy Process as well as Simple Additive Weighting; using these weights, the values of all factors are calculated and classified. Using the same criteria (attributes) and the WEKA software, after data preprocessing, the factors are classified by the Support Vector Machine, one of the most accurate data mining approaches. The results show that the Support Vector Machine predicts more accurately than the other techniques.
      • Open Access Article

        79 - Multiple Simultaneous Damage Detection in large-span bridges
        محمد وحیدی آرمین عطیمی نژاد مریم فیروزی محمد هریسچیان
        This paper presents a powerful two-step method for damage detection in large-span bridges with variable cross-sections. Bridges are basic infrastructure for urban and suburban transportation, and timely detection of damage during their operation is important, since damage in this category of structures causes service disruption during natural disasters. The presented method combines the spectral finite element method with a modal strain energy damage index, and a genetic algorithm with support vector regression, to detect damage and estimate its severity. The spectral finite element method is one of the efficient methods in the field of wave propagation; it models structures with high flexibility and can detect micro damage. Vibration-based methods are widely used to detect structural damage, and among them the modal strain energy damage index has the highest sensitivity. The case study is the Crowchild Bridge in western Canada, which has special characteristics in terms of geometry and structural elements. In this research, the modal strain energy damage index is modified to account for the change of cross-section along the girders, and support vector regression is used as a robust technique for estimating damage severity. To increase accuracy and improve the severity estimation, a genetic algorithm optimizes the effective parameters of the support vector regression. The combined genetic algorithm and support vector regression method estimates the severity of damage favorably.