List of Articles: Machine Learning

      • Open Access Article

        1 - Predicting stock prices using data mining methods.
        Mojtaba Hajigholami
        This article discusses data mining methods for predicting financial markets and analyzing sustainable development in financial matters. It also examines the impact of using data mining methods in the stock market and their effectiveness in this area. The research introduces a machine learning approach that generates information using publicly available data and uses this information for accurate prediction. It also explores various data mining methods relevant to financial market analysis, focusing on predicting stock market movements and trends. The study demonstrates that due to the dynamic and variable nature of financial markets influenced by economic, political, and social factors, the use of machine learning and data mining methods can lead to more accurate predictions of stock price movements. Given the extensive and complex data in financial markets, data mining methods have the potential to discover hidden patterns and determine relationships between various variables. Various machine learning algorithms such as artificial neural networks, support vector machines, and random forests, alongside statistical analyses, help improve the analytical capabilities of analysts and investors in making economic decisions. Furthermore, the use of big data and complex analyses has contributed to the development of intelligent trading strategies that can help optimize returns on investments. For example, analysts can enhance the accuracy of their predictions by incorporating sentiment data from social networks into their models. The study emphasizes that sustainable development in financial markets requires a deeper understanding and more precise analysis of data, ultimately leading to stronger data-driven decision-making and trading processes.
      • Open Access Article

        2 - Presenting a new model for ATM demand scenario
        Alireza Agha Gholizadeh Sayyar Mohamadreza Motadel Alireza Pour ebrahimi
        In today's competitive world, the ability to recognize and predict customer demand is an important issue for the success of organizations. Since ATMs are one of the most important channels for cash distribution and one of the most fundamental criteria for assessing banks' level of service, this paper examines the number of customers referring to ATM devices based on the time and location of the devices. The article seeks a dynamic and practical model for predicting the number of referrals to each ATM depending on the time and location of the device. To this end, data from 378 ATMs across the city of Tehran over a one-month period, containing 69,418 records, were used. By clustering the statistical data in spatial and temporal dimensions, the model learns the pattern in the aggregate data and, based on a decision tree, predicts the number of referrals to each device. To improve the quality of banking services and the performance of the ATM network, combining the model with the optimal placement of ATMs in spatial and temporal dimensions is proposed.
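
        A minimal sketch of the two-stage idea described in the abstract above (spatio-temporal clustering followed by a decision-tree predictor), written with scikit-learn; the CSV file and column names (latitude, longitude, hour, weekday, referrals) are hypothetical stand-ins, not the paper's actual data.

        # Hypothetical sketch: cluster ATM records in space/time, then predict referral counts with a decision tree.
        import pandas as pd
        from sklearn.cluster import KMeans
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        df = pd.read_csv("atm_transactions.csv")          # hypothetical file: one row per ATM per hour
        spatial_temporal = df[["latitude", "longitude", "hour", "weekday"]]

        # Step 1: cluster records in the spatial and temporal dimensions.
        df["cluster"] = KMeans(n_clusters=8, random_state=0).fit_predict(spatial_temporal)

        # Step 2: train a decision tree on the cluster label plus raw features to predict referral counts.
        X = df[["cluster", "latitude", "longitude", "hour", "weekday"]]
        y = df["referrals"]
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_train, y_train)
        print("MAE:", mean_absolute_error(y_test, tree.predict(X_test)))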
      • Open Access Article

        3 - PEML-E: EEG eye state classification using ensembles and machine learning methods
        Razieh Asgarnezhad Karrar Ali Mohsin Alhameedawi
      • Open Access Article

        4 - Forecasting the operating cash flow of companies listed on the Tehran Stock Exchange using machine learning methods
        Hamed Rajabzadeh Jamadverdi Gorganli Davaji Arash Naderian Majid Ashrafi
        Cash is the most liquid financial asset of companies. This characteristic gives cash flow tremendous importance, and the ability to make optimal and timely financial decisions is greatly influenced by it. Companies with good internal cash flows are less likely to rely on external financing, and lenders can more easily lend to these companies because of their good liquidity. The present study is applied research in terms of purpose and uses the panel (combined) data method. Data were collected through document mining and reference to databases, and the data analysis method is inferential. The required data were extracted from the Rahavard Novin software, corporate financial statements, and the Codal website. The statistical population consists of all companies listed on the Tehran Stock Exchange in the period 2011 to 2018, and the financial information of 138 companies over 8 years has been used. The purpose of this study is to predict operating cash flow with the PLSVM and CART artificial intelligence approaches in companies listed on the Tehran Stock Exchange. The company's operating cash ratio was considered as the dependent variable (liquidity) and financial metrics were considered as the initial independent variables. The results of testing the research hypotheses show that the parametric, nonlinear, rule-based artificial intelligence approach has a high ability to predict the liquidity of companies on the Tehran Stock Exchange.
      • Open Access Article

        5 - The Detection of Financial Statements Fraud According to the Audit Report of Financial Statements
        Mehdi Rezaie Mahdi Nazemi Ardakani Alireza Naser Sadrabadi
        This paper aims at detecting financial statement fraud based on the audit report of financial statements. The initial research data were collected from a statistical sample of 164 companies listed on the Tehran Stock Exchange from 2014 to 2017, selected through the systematic sampling method. The statistical sample was divided into two separate groups, i.e. fraudulent (1) and non-fraudulent (0) companies. The independent fraud-related variables included 41 financial and nonfinancial variables, selected based on theoretical foundations and the research background. The data of the variables, collected through the desk method, were analyzed with five machine learning techniques: Bayesian networks, decision trees, artificial neural networks, support vector machines, and a combined method. According to the results, all of these techniques were highly capable of detecting fraud in financial statements. Moreover, the proposed combined technique outperformed the other techniques in evaluation accuracy and power, with an estimation rate of 96.2%.
      • Open Access Article

        6 - Application of Machine Learning Models for flood risk assessment and producing maps to identify flood-prone areas: Literature Review
        Parisa Firoozishahmirzadi Shaghayegh Rahimi Zeinab Esmaeili Seraji
      • Open Access Article

        7 - Machine learning clustering algorithms based on Data Envelopment Analysis in the presence of uncertainty
        Reza Ghasempour Feremi Mohsen Rostamy-Malkhalifeh
      • Open Access Article

        8 - A combined machine learning algorithms and Interval DEA method for measuring and predicting efficiency
        Hasan Babaei Keshteli Mohsen Rostamy-Malkhalifeh
      • Open Access Article

        9 - Assessing the stability of maximum entropy prediction for rill erosion modelling
        maryam pournader sadat feiznia hasan ahmadi haji karimi hamidreza peirovan
        Soil erosion management requires appropriate solutions, which can be achieved by knowing the soil erosion situation. The aim of this study is to model rill erosion potential using maximum entropy (MaxEnt) and to investigate its robustness for understanding rill erosion susceptibility in the Golgol watershed, Ilam province. For this purpose, different geo-environmental factors were selected to be employed in the modeling process. In addition, 157 rill erosion events were recorded by a global positioning system (GPS). These events were then divided into training and validation classes with a ratio of 70:30. To evaluate model robustness, this partitioning was repeated three times, and therefore three sample datasets (D1, D2, and D3) were prepared. The area under the receiver operating characteristic (AUC) curve was used to evaluate the performance of the model. Regarding the robustness results, all of the datasets obtained good AUC values and all of them were robust for both goodness-of-fit (RAUC = 1.3) and prediction performance (RAUC = 3.1). In other words, the results demonstrated that the model remained quite stable when the calibration and validation data were changed. In addition, we found that the MaxEnt model is capable of producing a rill erosion susceptibility map. Furthermore, based on the sensitivity analysis, the most important components in rill erosion susceptibility modeling were found to be lithology and distance from stream. The adopted methodology can be useful as an efficient approach for land use planning and erosion risk management.
      • Open Access Article

        12 - Explaining the theoretical model of production and development of architectural plans in the interaction of machine learning and genetic algorithms
        reza babakhani azadeh Shahcheraghi hossein zabihi
        Background and Objective: The aim of this study is to explain a theoretical model for finding a new solution for the production and development of the spatial arrangement of architectural plans based on interactive and integrated methods with the help of machine learning and genetic algorithms. Evolutionary algorithms alone are not effective, but machine learning algorithms can learn plans and form the basis of practical models that can develop and generate new samples through the use of genetic algorithms. Material and Methodology: The combined research method includes library studies, collecting raw data, reviewing case samples, and using computational formulas as objective and penalty functions. Findings: Studies show that the genetic algorithm does not have the ability to store memory and, on the other hand, the basis of its calculations is jumping and random action, so this process alone is not effective in the production of architectural plans. Discussion and Conclusion: The findings show that the machine learning algorithm, due to its example-based structure, can store and recognize examples, and the genetic algorithm, which is a search-based and scalable algorithm, can produce more examples of architectural plans each time based on the proposed mathematical model.
      • Open Access Article

        11 - Predicting the Financial Limitations of Companies Listed on the Tehran Stock Exchange Using the Relief-SVM-CHAID Methods
        maryam salmanian hamid reza vakilifard mohsen hamidian fatemeh sarraf Roya darabi
        Financial constraints are one of the key issues facing all companies, and predicting them is an important phenomenon for investors, creditors, and other users of financial information. This research uses the information of 7 financial years during the period 2012-2017 and the financial information of 213 companies to study the factors affecting financial constraints and to predict them using artificial intelligence algorithms (the support vector machine classification algorithm and the rule-based CHAID algorithm). In the first step, using the Relief algorithm, five of the initial research variables were selected as the important variables for the companies' financial constraints: the ratio of total operational assets to total assets, the ratio of total debt to total assets, Tobin's Q, the return on sales, and the ratio of institutional owners. The results also showed that the three-class support vector machine algorithm, using the selected financial data, is able to predict future financial constraints with a power greater than 80%, higher than the rule-based algorithm. Keywords: financial constraints, machine learning methods, financial variables, corporate governance. JEL: M41, B26, C63
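
        A minimal sketch of the two-stage idea described above (feature ranking followed by a three-class SVM), using scikit-learn's mutual-information selector as a stand-in for the Relief step; the CSV file and column names are hypothetical.

        # Hypothetical sketch: select the most informative ratios, then classify the constraint level with an SVM.
        import pandas as pd
        from sklearn.feature_selection import SelectKBest, mutual_info_classif  # stand-in for Relief
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        df = pd.read_csv("financial_ratios.csv")               # hypothetical: one row per firm-year
        X = df.drop(columns=["constraint_class"])              # hypothetical label with 3 constraint levels
        y = df["constraint_class"]

        model = make_pipeline(
            SelectKBest(mutual_info_classif, k=5),             # keep the 5 strongest predictors
            StandardScaler(),
            SVC(kernel="rbf", C=1.0),                          # three-class SVM (one-vs-one internally)
        )
        print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())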
      • Open Access Article

        12 - Distributed Denial of Service Attacks Detection in Internet of Things Using the Majority Voting Approach
        Habibollah Mazarei Marziye Dadvar MohammadHadi Atabakzadeh
        With the ever-increasing number of Internet of Things devices, their security is becoming a very worrying issue. Weak security measures enable attackers to attack IoT devices. One of these attacks is the distributed denial of service (DDoS) attack. Therefore, intrusion detection systems are of special importance in the Internet of Things. In this research, the majority voting ensemble approach, which is a subset of machine learning, has been used to detect and predict attacks. The motivation for using this method is to achieve better detection accuracy and a very low false positive rate by combining several machine learning classification algorithms in heterogeneous Internet of Things networks. The new and improved CICDDoS2019 dataset has been used to evaluate the proposed method. The simulation results show that by applying the majority voting ensemble method to five attacks from this dataset, the method achieved detection accuracies of 99.9668%, 99.9670%, 100%, 99.9686%, and 99.9674% in identifying DNS, NETBIOS, LDAP, UDP, and SNMP attacks, respectively, achieving better and more stable performance in detecting and predicting attacks than the base models.
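
        A minimal sketch of a hard majority-voting ensemble of the kind described above, built with scikit-learn's VotingClassifier; the synthetic feature matrix and binary attack labels are placeholders rather than the actual CICDDoS2019 preprocessing.

        # Hypothetical sketch: combine several classifiers by hard majority vote for attack detection.
        from sklearn.ensemble import VotingClassifier, RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score
        from sklearn.datasets import make_classification

        # Placeholder data; in the paper this would be flow features extracted from CICDDoS2019.
        X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

        voter = VotingClassifier(
            estimators=[
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0)),
                ("knn", KNeighborsClassifier()),
                ("lr", LogisticRegression(max_iter=1000)),
            ],
            voting="hard",                       # each base model gets one vote; the majority label wins
        )
        voter.fit(X_train, y_train)
        print("Accuracy:", accuracy_score(y_test, voter.predict(X_test)))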
      • Open Access Article

        13 - Optimizing Solar Radiation Prediction Based on The Internet of Things Platform in Photovoltaic Power Plant
        Maryam Mahmoudi Neda Ashrafi Khozani Shabnam Nasr Esfahani
        Solar radiation is one of the most important parameters in determining the output power of photovoltaic panels. Accurate prediction of this parameter is of special importance for planning in dispatching and load management units. Due to the uncertainty in the amount of solar radiation and the difficulty of predicting it, managers and designers face economic and managerial challenges. In this research, a prediction method with high accuracy and generality is presented using tree-based methods, and the performance of these methods is improved with the help of meta-heuristic algorithms. The main emphases in the proposed method are the absence of over-fitting, high reliability, and the ability to be used in Internet of Things systems. Meta-heuristic algorithms have been used not only in the optimization of tree-based methods, but also in feature selection and instance selection. The use of meta-heuristic methods, as the main innovative aspect of this research, has helped to improve the quality of the final output not only by obtaining the optimal settings of the machine learning models, but also by reducing the effect of noise, outliers, and low-impact inputs. The fitness function proposed in this research for optimizing the models makes the final output not only highly accurate but also easy to implement in the real environments of photovoltaic power plants. The final output is a strong model that achieves a score of 0.95 on the R-squared criterion.
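
        A minimal sketch of the general shape of the approach above (tuning a tree-based irradiance predictor), with scikit-learn's randomized search standing in for the meta-heuristic optimizer used in the paper; the dataset and column names are hypothetical.

        # Hypothetical sketch: tune a tree-based irradiance predictor; RandomizedSearchCV stands in
        # for the meta-heuristic optimizer described in the abstract.
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import RandomizedSearchCV

        df = pd.read_csv("irradiance.csv")                       # hypothetical IoT sensor log
        X = df[["temperature", "humidity", "cloud_cover", "hour_of_day"]]
        y = df["solar_radiation"]

        search = RandomizedSearchCV(
            RandomForestRegressor(random_state=0),
            param_distributions={
                "n_estimators": [100, 200, 400],
                "max_depth": [4, 8, 12, None],
                "min_samples_leaf": [1, 2, 5, 10],
            },
            n_iter=20,
            scoring="r2",                                        # the paper reports R-squared around 0.95
            cv=5,
            random_state=0,
        )
        search.fit(X, y)
        print("Best R^2:", search.best_score_, "with", search.best_params_)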
      • Open Access Article

        14 - Multilayer Perceptron Approach in Breast Cancer Diagnosis
        Emine Avşar Aydin Gözde Saribaş
      • Open Access Article

        15 - Improving Diabetes Diagnosis using Ensembles and Machine Learning Methods
        Razieh Asgarnezhad Karrar Ali Mohsin Alhameedawi
      • Open Access Article

        16 - A Novel method for assigning Joint power spectrum and Power Selection in device to device networks to improve performance
        Anahita Jabbari S. Mahmood Daneshvar Farzanegan
      • Open Access Article

        17 - A Survey on Applications of Machine Learning in Bioinformatics and Neuroscience
        Narges Habibi Shahla Mousavi
      • Open Access Article

        18 - Identifying Origins of Atmospheric Aerosols using Remote Sensing and Data Mining (Case study: Yazd province)
        Mohamad Kazemi Ali Reza Nafarzadegan Fariborz Mohammadi Ali Rezaei Latifi
        Background and Objective: The Middle East is one of the most important regions in the world for dust production. Iran, located in the Middle East, is exposed to numerous local and trans-regional dust systems due to its location in the arid and semi-arid regions of the world. Dust storms, in addition to covering arable land and plants with wind-blown materials, destroy fertile lands, reduce biological production and biodiversity, and severely affect the survival of residents. Dust storms are involved in the transmission of dangerous pathogens to humans, air pollution, and damage to respiratory function. Dust storms in Yazd province are relatively common, and the average number of days with dust storms in the province reaches 43 days a year. This phenomenon has caused many problems for the people of the province. The main indicators of air quality are the concentration of suspended particles and the aerosol optical depth (AOD) following the occurrence of dust events. Numerous studies have been conducted worldwide to identify the centers of dust generation and their origin. However, to the best of the authors' knowledge, there is no study on the spatial zoning of dust conditions using the three algorithms CART, MARS, and TreeNet as predictive models. The purpose of this study is to forecast and zone the potential of different areas for the production of dust aerosols using remote sensing data and data mining models, as well as to specify the most important variables affecting this phenomenon in Yazd province. Materials and Methods: Yazd province lies in a dry region of central Iran. The province has an average annual rainfall of about 57 mm and an average annual temperature of about 20 ºC. The maximum temperature in the warmest month of the province is close to 46 ºC. The maximum wind speed in this province is up to 120 kilometres per hour. The Google Earth Engine (GEE) interface (JavaScript editor) was applied to collect remote sensing data in order to form three data sets containing features related to topography, climate, and land surface conditions. These features were employed as the independent variables of the models, which were built using three data mining algorithms, classification and regression trees (CART), multivariate adaptive regression splines (MARS), and TreeNet, to specify the potential of areas for dust production. The dependent (target) variable of the models was the aerosol optical depth (AOD), acquired from MOD04 AOD retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard NASA's Terra satellite. The outcomes of the three models for classifying areas with different dust potentials were evaluated using performance criteria such as R-squared, mean absolute deviation (MAD), mean square error (MSE), mean relative absolute deviation (MRAD), and root mean square error (RMSE). Results and Discussion: The results showed that the variables mostly affecting the dependent variable (AOD) in the MARS model were actual evapotranspiration, soil moisture, and the Palmer drought severity index. The values of R2 and RMSE in the MARS model were 0.72 and 0.02, respectively. Similarly, the features with the highest relative importance according to the TreeNet model were soil moisture, the Palmer drought severity index, and actual evapotranspiration. The values of R2 and RMSE in the TreeNet model were 0.75 and 0.019, respectively. The results revealed that the CART model, with R2 = 0.85, MAD = 0.011, MSE = 0.002, MRAD = 0.262, and RMSE = 0.014, had the best performance compared with the other two data mining models. Soil moisture, elevation, reference and actual evapotranspiration, minimum and maximum temperature, the Palmer drought severity index, downward shortwave solar radiation, and wind speed were the most important variables in forecasting the potential of areas for dust production, respectively. Also, the areas with very high, high, moderate, low, and very low susceptibility occupied about 16%, 19%, 26%, 20%, and 20% of Yazd province, respectively. Conclusion: All three models, based on the three data mining algorithms CART, MARS, and TreeNet, showed good agreement in specifying the most important variables affecting the optical depth of the dust aerosols in the study area. However, these models indicated different priority orders for the identified variables in terms of relative importance; besides, there were differences in their performance criteria. As mentioned above, the CART model was the best-performing model of the current study for specifying the potential of areas for the generation of dust aerosols. According to this model, 25.8% of the province was classified as moderate-risk for aerosol production, 18.6% as high-risk, and 16.0% as very high-risk for dust aerosols. The high-risk areas are mostly spread in the western and southwestern regions of Yazd province. http://dorl.net/dor/20.1001.1.26767082.1400.12.1.4.5
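
        A minimal sketch of the CART step described above (a regression tree predicting AOD from terrain and climate features) with the evaluation metrics named in the abstract; the predictor table and its column names are hypothetical, and the MARS and TreeNet models are omitted.

        # Hypothetical sketch: regression tree (CART) predicting aerosol optical depth (AOD).
        import numpy as np
        import pandas as pd
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

        df = pd.read_csv("yazd_dust_features.csv")      # hypothetical: soil moisture, PDSI, ET, elevation, ...
        X, y = df.drop(columns=["AOD"]), df["AOD"]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        cart = DecisionTreeRegressor(max_depth=8, random_state=0).fit(X_tr, y_tr)
        pred = cart.predict(X_te)

        print("R2  :", r2_score(y_te, pred))
        print("MAD :", mean_absolute_error(y_te, pred))
        print("MSE :", mean_squared_error(y_te, pred))
        print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))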
      • Open Access Article

        19 - Comparison of the classification methods in software development effort estimation
        Sadegh Ansaripour Taghi Javdani Gandomani
        Introduction: The main goal of software companies is to provide solutions in various fields to better meet the needs of customers. Successful modeling depends on finding correct and accurate requirements. The key to successful development, for adapting and integrating the different developed parts, is selecting and prioritizing the requirements that will advance the workflow and ultimately lead to a quality product. Validation is the key part of this work; it includes techniques that confirm that a set of requirements is accurate enough for building a solution that meets the project's business objectives. Requirements change during the project, and managing these changes is important to ensure the accuracy of the software built for stakeholders. In this research, we discuss the process of checking and validating software requirements. Method: Requirement extraction is conducted by means of discovery, review, documentation, and understanding of user needs and the limitations of a system. The results are presented in the form of products such as textual requirement descriptions, use cases, processing diagrams, and user interface prototypes. Findings: Data mining and recommender systems can be used to elicit the necessary requirements; as another method, social networks and collaborative filtering can be used to identify requirements for large projects. Discussion: In the area of product development, requirements engineering approaches focus exclusively on requirement development. There are challenges in the development process due to the human resources involved. If the challenges are not identified well at this stage, fixing them after software production becomes extremely expensive. Therefore, errors should be minimized and identified and corrected as soon as possible. One of the key issues in the field of requirements is validation, which first confirms that the requirements can be implemented as a set of characteristics according to the system description, and secondly that they satisfy a set of essential characteristics such as completeness, consistency, conformance to standard criteria, non-contradiction of requirements, absence of technical errors, and lack of ambiguity. In fact, the purpose of validation is to ensure that a sustainable and reproducible product is created according to the requirements.
      • Open Access Article

        20 - Development of a multi-agent recommender system for intelligent shopping assistants
        Ramazan Teimouri Yansari Mojtaba Ajoudani
        Introduction: Due to the increasing volume of information and services available on the web, tools such as recommender systems can be provided in websites and applications to help users find the information and services they are interested in. In this way, suitable guidance and suggestions are offered to users in different choices, according to the user's priorities in a specific situation. Method: Recommender systems are information systems that assist the decision-making process by modeling the behavior of users in operational environments when ranking, comparing, and selecting items, narrowing the information search through high-quality and accurate recommendations. In this research, a multi-agent recommender system is proposed as an intelligent shopping assistant for suggesting suitable offers in the buying process. The proposed model is used to analyze the sales dataset of a UK-based store containing 1,067,371 records of online sales data. Results: The proposed model was simulated in MATLAB 2022 and the results of applying it to the online sales data were analyzed. According to the results, the accuracy of the proposed model was 91.5% on average, compared to 86.41% for the neural network model, 78.32% for the KNN model, 74.38% for the SOM Ensembles model, 69.78% for the Global Top-N model, 72.31% for the weighted item-based model, and 59.68% for the Naïve Bayes model, giving higher accuracy in making the right suggestions to users. Discussion: In this research, while studying recommender systems, the challenges in this field were examined, and multi-agent systems were used to provide suggestions and recommendations with high accuracy and quality in ranking, comparing, and selecting users' preferred items in the decision-making process in operational environments. By combining multi-agent systems, a multi-agent recommender system was proposed that can provide suitable recommendations as a purchasing assistant in the purchasing process. The results of applying the proposed model to the purchase-history data of the customers of an online shop showed that the proposed model performs well on the evaluated parameters in comparison with the common methods in this field.
      • Open Access Article

        21 - Sentiment Analysis of People’s opinion about Iranian National Cars with BERT
        Leila Gonbadi Niloofar Ranjbar
        Introduction: With the development of the internet and social media, people have been actively discussing political and economic issues, and sharing their opinions online. The vast amount of data generated from these online discussions can be analyzed through text mining methods to extract valuable information. One such method is aspect-based sentiment analysis, which allows for the analysis of people's opinions on various aspects of a topic. In this paper, we will focus on the analysis of people's opinions on the Iranian automobile industry and national cars using aspect-based sentiment analysis. Methodology: To achieve our goal, we first introduce a method for extracting different aspects related to national cars. We then use the "BERT" language model to extract vectors for different sentences related to the various aspects and finally use a neural network to classify the sentiments expressed in these sentences as positive, negative, or neutral. Results: The analysis of public opinions on various aspects of Iranian cars showed that the most discussed aspects were design, quality, and price. The sentiment expressed towards design was largely positive, with people expressing admiration for the unique and modern designs of Iranian cars. The sentiment expressed towards quality was mixed, with some people praising the improved quality of national cars, while others criticized the use of low-quality materials. The sentiment expressed towards price was largely negative, with people complaining about the high prices of Iranian cars compared to their foreign counterparts. Discussion: The results of our analysis provide valuable insights into the level of public satisfaction with various aspects of Iranian cars. The mixed sentiment expressed towards the quality of Iranian cars highlights the need for manufacturers to focus on using high-quality materials to improve the overall quality of their products. The negative sentiment expressed toward the price of Iranian cars suggests that car manufacturers need to find ways to reduce production costs and offer more competitive pricing. In conclusion, aspect-based sentiment analysis can be used as an effective method to analyze public opinions on various aspects of a topic. Our analysis of public opinions on the Iranian automobile industry and national cars provides valuable insights for car manufacturers to improve the design, quality, and pricing of their products. By taking these insights into account, manufacturers can improve their performance and meet the needs and expectations of their customers.
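
        A minimal sketch of the pipeline described above (BERT sentence vectors fed to a small classifier), using the Hugging Face transformers library with a multilingual BERT checkpoint as an assumed stand-in for the Persian model actually used; the example sentences and labels are hypothetical.

        # Hypothetical sketch: encode opinion sentences with BERT, then classify sentiment per aspect.
        import torch
        from transformers import AutoTokenizer, AutoModel
        from sklearn.neural_network import MLPClassifier

        texts = ["The design of this car is modern.", "The price is far too high."]   # hypothetical
        labels = [1, 0]                                                               # 1 = positive, 0 = negative

        name = "bert-base-multilingual-cased"    # assumed stand-in; the paper uses a BERT model for Persian text
        tokenizer = AutoTokenizer.from_pretrained(name)
        bert = AutoModel.from_pretrained(name)

        with torch.no_grad():
            enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
            vectors = bert(**enc).last_hidden_state[:, 0, :].numpy()   # [CLS] vector per sentence

        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(vectors, labels)
        print(clf.predict(vectors))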
      • Open Access Article

        22 - Deep Learning: Concepts, Types, Applications, and Implementation
        Fereshteh Aghabeigi Sara Nazari Nafiseh Osati Iraqi
      • Open Access Article

        23 - Presenting a new model for rapid diagnosis of acute respiratory diseases using machine learning algorithms
        Mehran Nezami Avaz Naghipour Behnam Safiri Iranagh
        Coronavirus, severe acute respiratory syndrome, and swine flu are diseases caused by acute respiratory viruses. Because these viruses spread rapidly among humans, advanced tools are required to identify dangerous mortality factors with high accuracy. Machine learning methods directly address this issue and are essential tools for understanding and guiding public health interventions. In this article, machine learning is used to investigate demographic and clinical significance. The investigated characteristics include age, gender, fever, country, and clinical details such as cough, shortness of breath, etc. Several machine learning algorithms have been implemented and applied to the collected data; the K-Nearest Neighbor algorithm achieves the highest accuracy (more than 97%) in predicting and selecting features that correctly represent the status of the viruses.
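
        A minimal sketch of the K-Nearest Neighbor classification step mentioned above; the CSV file and the demographic and clinical feature columns are hypothetical.

        # Hypothetical sketch: KNN classifier on demographic and clinical features.
        import pandas as pd
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        df = pd.read_csv("respiratory_cases.csv")                  # hypothetical patient records
        X = df[["age", "gender", "fever", "cough", "shortness_of_breath"]]
        y = df["outcome"]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
        print("Accuracy:", accuracy_score(y_te, knn.predict(X_te)))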
      • Open Access Article

        24 - Evaluation of Intelligent and Statistical Prediction Models for Overconfidence of Managers in the Iranian Capital Market Companies
        Shokoufeh Etebar Roya Darabi Mohsen Hamidiyan Seiyedeh Mahbobeh Jafari
      • Open Access Article

        25 - An Algorithmic Trading system Based on Machine Learning in Tehran Stock Exchange
        Hamidreza Haddadian Morteza Baky Haskuee Gholamreza Zomorodian
      • Open Access Article

        26 - Presenting the smart pattern of credit risk of the real banks’ customers using machine learning algorithm.
        Hojjat Tajik Ghodratollah Talebnia Hamid Reza Vakili Fard Faegh Ahmadi
      • Open Access Article

        27 - Providing an improved factor pricing model using neural networks and the gray wolf optimization algorithm
        Reza Tehrani Ali Souri Ardeshir Zohrabi Seyyed Jalal Sadeghi Sharif
      • Open Access Article

        28 - Developing Financial Distress Prediction Models Based on Imbalanced Dataset: Random Undersampling and Clustering Based Undersampling Approaches
        Seyed behrooz Razavi ghomi Alireza Mehrazin Mohammad reza shoorvarzi Abolghasem Masih Abadi
        So far, distress prediction models have been based on balanced data, but such sampling is not consistent with the reality of the statistical population of companies. If the data are balanced, the bias in sample selection may lead to an underestimation of the type I error and an overestimation of the type II error of the models. Although models based on imbalanced data are compatible with reality, they have a higher type I error compared to models based on balanced data. The cost of a type I error is more important to beneficiaries than the cost of a type II error. In this study, random and clustering-based undersampling were used to reduce the type I error of models based on imbalanced data. The tested data included 760 companies since 2007-2007 with 4 different degrees of distress. The results of testing H1 to H3 showed that, in all cases, the type I error and type II error of models based on balanced data were respectively lower and higher compared to models based on imbalanced data; also, in most cases, the geometric mean of models based on balanced data was higher compared to models based on imbalanced data. The results of testing H4 to H6 show that, in most cases, the type I error, type II error, and geometric mean criterion of models based on modified imbalanced data were respectively lower, higher, and higher compared to the models based on imbalanced data; in other words, applying undersampling methods to imbalanced training data led to a decrease in the type I error and an increase in the type II error and geometric mean criteria. As a result, using models based on modified imbalanced data is suggested to beneficiaries.
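
        A minimal sketch of the random-undersampling idea and the geometric-mean criterion discussed above, using the imbalanced-learn package; the distress dataset, its label column, and the logistic-regression base model are hypothetical choices for illustration.

        # Hypothetical sketch: random undersampling of the majority (healthy) class before training,
        # then evaluation with the geometric mean of class-wise recalls.
        import pandas as pd
        from imblearn.under_sampling import RandomUnderSampler
        from imblearn.metrics import geometric_mean_score
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        df = pd.read_csv("distress_panel.csv")                 # hypothetical: 1 = distressed, 0 = healthy
        X, y = df.drop(columns=["distressed"]), df["distressed"]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        # Undersample only the training data; the test data keeps its natural (imbalanced) distribution.
        X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X_tr, y_tr)

        model = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
        print("G-mean:", geometric_mean_score(y_te, model.predict(X_te)))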
      • Open Access Article

        29 - Predicting the Top and Bottom Prices of Bitcoin Using Ensemble Machine Learning
        Emad Koosha Mohsen Seighaly Ebrahim Abbasi
      • Open Access Article

        30 - Designing a Trading Strategy to Buy and Sell the Stock of Companies Listed on the New York Stock Exchange Based on Classification Learning Algorithms
        Nasser Heydari Majid Zanjirdar Ali Lalbar
        This research investigated the development of a stock trading strategy for companies on the New York Stock Exchange (NYSE), a prominent global market. Data was acquired from established libraries and the Yahoo Finance database. The model employed technical analysis indicators and oscillators as input features. Machine learning classification algorithms were used to design trading strategies, and the optimal model was identified based on statistical performance metrics. Accuracy, recall, and F-measure were utilized to evaluate the classification algorithms. Additionally, advanced statistical methods and various software tools were implemented, including Python, Spyder, SPSS, and Excel. The Kruskal-Wallis test was employed to assess the statistical differences between the designed strategies. A sample of 41 actively traded NYSE companies across diverse sectors such as financial services, healthcare, technology, communication services, consumer cyclicals, consumer staples, and energy was chosen using a filter-based approach on June 28th, 2021. The selection criteria included a market capitalization exceeding $200 billion and an average daily trading volume surpassing 1 million shares. Evaluation metrics revealed that the designed random forest trading strategy achieved a good fit with the data and exhibited statistically significant differences from the other strategies based on classification learning algorithms.
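
        A minimal sketch of the classification-based strategy design described above: simple technical-indicator features, a next-day direction label, and a random forest classifier scored with accuracy, recall, and F-measure. The price file, the chosen indicators, and the column names are assumptions for illustration, not the paper's exact feature set.

        # Hypothetical sketch: build indicator features, label next-day direction, classify with a random forest.
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score, recall_score, f1_score

        px = pd.read_csv("nyse_prices.csv")["close"]          # hypothetical daily closing prices for one stock
        feats = pd.DataFrame({
            "sma_10_ratio": px.rolling(10).mean() / px,       # price relative to its 10-day moving average
            "sma_30_ratio": px.rolling(30).mean() / px,
            "momentum_5": px.pct_change(5),
            "target": (px.shift(-1) > px).astype(int),        # 1 if tomorrow's close is higher
        }).dropna().iloc[:-1]                                  # drop the last row, whose next-day label is unknown

        split = int(len(feats) * 0.7)                          # chronological split, no shuffling
        train, test = feats.iloc[:split], feats.iloc[split:]
        cols = ["sma_10_ratio", "sma_30_ratio", "momentum_5"]

        rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(train[cols], train["target"])
        pred = rf.predict(test[cols])
        print("accuracy:", accuracy_score(test["target"], pred))
        print("recall  :", recall_score(test["target"], pred))
        print("F1      :", f1_score(test["target"], pred))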
      • Open Access Article

        31 - Early Warning Model for Solvency of Insurance Companies Using Machine Learning: Case Study of Iranian Insurance Companies
        Saeed Naseri Khezerlou Atousa Goodarzi
        Stakeholders of an organization avoid undesirable outcomes caused by ignoring the risks. Various models and tools can be used to predict future outcomes, aiming to avoid the undesirable ones. Early warning models are one of the approaches that could help them in doing so. This study focuses on developing an early warning system using machine learning algorithms for predicting solvency in the insurance industry. This study analyses 23 financial ratios from Iranian general insurance companies listed on the Tehran Stock Exchange between 2015 and 2020. The model uses Decision Tree, Random Forest, Artificial Neural Networks, Gradient Boosting Machine and XGBoost algorithms, with Boruta as a feature selection method. The dependent variable is the solvency margin ratio, and the other 22 ratios are the independent variables, which Boruta reduces to 7 variables. Firstly, the performance of the machine learning models on two datasets, one with 22 independent variables and one with 7, is compared based on RMSE values. The XGBoost algorithm performs the best on both data sets. Additionally, the study predicts the 2020 values for 19 insurance companies, performs stage classifications, and compares actual stages to predicted stages. In this analysis, Random Forest has the best estimate accuracy on both data sets, while Gradient Boosting Machine has the best estimate accuracy on the Boruta data set. Finally, the study compares the machine learning models' results in terms of capital adequacy classification, where Random Forest performs the best on both data sets, and Gradient Boosting Machine on the Boruta data set.
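
        A minimal sketch of the regression step described above (XGBoost predicting the solvency margin ratio from financial ratios, compared by RMSE); the ratio table is hypothetical and the Boruta feature-selection step is omitted.

        # Hypothetical sketch: predict the solvency margin ratio with XGBoost and score it by RMSE.
        import numpy as np
        import pandas as pd
        from xgboost import XGBRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        df = pd.read_csv("insurer_ratios.csv")                    # hypothetical: financial ratios per firm-year
        X, y = df.drop(columns=["solvency_margin_ratio"]), df["solvency_margin_ratio"]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        model = XGBRegressor(n_estimators=400, learning_rate=0.05, max_depth=4, random_state=0)
        model.fit(X_tr, y_tr)
        rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
        print("RMSE:", rmse)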
      • Open Access Article

        32 - Examining Financial Performance and Corporate Governance in Tehran Stock Exchange: A Hybrid Machine Learning and Data Envelopment Analysis Approach
        Pooneh Noparvar Saravi Morteza Bagheri Seyed Sadegh Hadian
        In the backdrop of an ever-evolving global business landscape and intense market competition, companies are faced with the imperative of strategically managing factors that influence their financial performance. This research delves into the intricate relationship between financial performance enhancement and corporate governance, with particular attention to the mediating role of human capital. The study centers its investigation on companies listed on the Tehran Stock Exchange and comprises a comprehensive sample of 140 top-level managers. A composite sampling approach, comprising a simple random sampling technique and Morgan's table, was employed to judiciously select a representative cohort of 103 participants. In the pursuit of rigorous academic analysis, the research leverages a goal-oriented, applied methodology, employing a descriptive survey design and a quantitative approach. The primary data for the study were methodically collected through rigorously designed and standardized questionnaires. Subsequent to data acquisition, a meticulous analytical process was undertaken using the Partial Least Squares (PLS) software, aligning with the latest developments in quantitative research techniques. The results stemming from hypothesis testing offer compelling insights into the dynamic relationship between corporate governance, human capital, and financial performance enhancement. Our findings convincingly demonstrate a significant positive impact of both corporate governance and human capital on the enhancement of financial performance in the context of Tehran Stock Exchange's listed companies. Furthermore, the empirical evidence strongly suggests that human capital plays a pivotal mediating role in the relationship between corporate governance practices and financial performance improvements. This study, in its pursuit of academic rigor, underscores the effectiveness of a novel hybrid approach, thoughtfully integrating machine learning and data envelopment analysis, to comprehensively examine the intricate interplay between financial performance enhancement and corporate governance within the context of the Tehran Stock Exchange's listed companies. The study contributes to the evolving body of literature in this domain and provides valuable insights for practitioners, policymakers, and researchers.
      • Open Access Article

        33 - Machine learning algorithms for time series in financial markets
        Mohammad Ghasemzadeha Naeimeh Mohammad-Karimi Habib Ansari-Samani
      • Open Access Article

        34 - Classification of papaya fruit based on maturity using machine learning and transfer learning approach
        mohammad ghorbani Mostafa Ghazizadeh-Ahsaee Kazem Jafari Naeimi
        Grading and packing fruits based on visual inspection can be time-consuming, destructive, and unreliable. The objective of this research is to provide an intelligent, fast, and reliable classification method to detect the maturity of papaya fruit at three levels: immature, partially mature, and mature. A total of 300 images were used in this article, with 100 images collected for each level. The use of two approaches, machine learning and transfer learning, is proposed to classify papaya fruit maturity status. The machine learning approach includes three feature descriptors and three different classifiers: the local binary pattern (LBP), the gray-level co-occurrence matrix (GLCM), the histogram of oriented gradients (HOG), the k-nearest neighbor (KNN) classification algorithm, the support vector machine (SVM), and the Naïve Bayes classification algorithm. The transfer learning methods include six pre-trained deep learning models: AlexNet, GoogLeNet, ResNet101, ResNet50, ResNet18, and VGG19. The KNN classifier using the HOG feature descriptor achieved 95.4% accuracy with a training time of 3:52 seconds. The classifier based on the VGG19 transfer learning approach recorded the best performance among the deep learning networks, obtaining 100% accuracy with a training time of 10:42 seconds. The two classification methods, using machine learning and transfer learning, obtained accuracies of 95.4% and 100%, respectively, which are 0.7% and 6% higher than the existing proposed methods.
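
        A minimal sketch of the machine-learning branch described above (HOG descriptors fed to a KNN classifier), using scikit-image and scikit-learn; the image folder layout and file extension are hypothetical.

        # Hypothetical sketch: HOG features + KNN for three maturity classes (immature, partially mature, mature).
        from pathlib import Path
        from skimage.io import imread
        from skimage.transform import resize
        from skimage.feature import hog
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        features, labels = [], []
        for label in ["immature", "partially_mature", "mature"]:          # hypothetical folder names
            for path in Path("papaya_images", label).glob("*.jpg"):
                gray = resize(imread(path, as_gray=True), (128, 128))
                features.append(hog(gray, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2)))
                labels.append(label)

        knn = KNeighborsClassifier(n_neighbors=3)
        print("CV accuracy:", cross_val_score(knn, features, labels, cv=5).mean())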
      • Open Access Article

        35 - Identifying the influencing factors in customer churn of Kurdistan Telecommunications Company and presenting models for predicting churn using machine learning algorithms
        vida sadeghi Anvar Bahrampour Seyed Ali Hosseini
        Customers are among the main sources of income and assets for any organization. With this view, companies have started to do more to retain their customers. Since in many companies the cost of acquiring a new customer is much higher than the cost of retaining an existing one, customer churn has become a main area of evaluation for these companies. Customer-facing companies, including those active in the technology industry, face a major challenge due to customer attrition. With the rapid development of the telecommunications industry, churn prediction has become one of the main activities in gaining a competitive advantage in the market. Predicting customer churn gives operators a period of time to remediate and implement a series of preventive measures before customers migrate to other operators. In this research, a decision support system for predicting and estimating the churn of customers of the Kurdistan Telecommunication Company (with 52,900 subscribers) is presented using different data mining and machine learning methods, including simple linear regression (SLR), multiple linear regression (MLR), polynomial regression (PR), logistic regression, artificial neural networks, AdaBoost, and random forest. The evaluations carried out on the dataset of the Kurdistan Province Telecommunication Company show the high performance of the artificial neural network with 99.9% accuracy, AdaBoost with 99.9% accuracy, and random forest with 100% accuracy.
      • Open Access Article

        36 - IoT-Based Disease Prediction and Diagnosis Systems
        mostafa Sarkabiri
      • Open Access Article

        37 - A Non-deterministic CNN-LSTM Hybrid Model for Bitcoin Cryptocurrency Price Prediction
        Ali Alijamaat Seyed Mohsen Mirhosseini
        In today's society, investment diversity has become very important. People reduce investment risk by diversifying their portfolios. Bitcoin has gained much popularity as one of the digital assets and has been included in the investment portfolios of individuals and institutions. Bitcoin price prediction is essential for determining price trends and transactions. For this purpose, various traditional methods as well as methods based on machine learning have been presented, each of which has its own advantages and disadvantages. Recently, the use of hybrid models has received attention. Hybrid methods have good efficiency and use the advantages of the combined techniques. This paper presents a hybrid method based on a deep convolutional neural network and a recurrent neural network with probabilistic dropout. Probabilistic random elimination (dropout) regularizes learning, avoids overfitting, and reduces model error. The results of the experiments show that the proposed method has higher accuracy than the compared methods in predicting the price of Bitcoin.
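
        A minimal sketch of a CNN-LSTM network with dropout kept active at inference time (Monte Carlo dropout), in the spirit of the hybrid model described above; the window length, layer sizes, and toy input data are assumptions rather than the paper's architecture.

        # Hypothetical sketch: Conv1D + LSTM price predictor with Monte Carlo dropout at inference.
        import numpy as np
        from tensorflow.keras import layers, Model

        window = 30                                            # assumed look-back window of daily prices
        inputs = layers.Input(shape=(window, 1))
        x = layers.Conv1D(32, kernel_size=3, activation="relu")(inputs)
        x = layers.MaxPooling1D(2)(x)
        x = layers.LSTM(64)(x)
        x = layers.Dropout(0.3)(x)                             # kept active at prediction time (MC dropout)
        outputs = layers.Dense(1)(x)
        model = Model(inputs, outputs)
        model.compile(optimizer="adam", loss="mse")

        # Toy random arrays standing in for scaled Bitcoin price windows.
        X = np.random.rand(256, window, 1).astype("float32")
        y = np.random.rand(256, 1).astype("float32")
        model.fit(X, y, epochs=2, batch_size=32, verbose=0)

        # Monte Carlo dropout: run several stochastic forward passes and average them.
        passes = np.stack([model(X[:5], training=True).numpy() for _ in range(20)])
        print("mean prediction:", passes.mean(axis=0).ravel(), "std:", passes.std(axis=0).ravel())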
      • Open Access Article

        38 - Detecting source-based fake news via the Word2vec algorithm
        Hamid Sharifi Heris Jafar Sheykhzadeh
      • Open Access Article

        39 - Anomaly Detection in Network Datasets with Honeypots in Industrial Control Systems
        Abbasgholi pashaei Mohammad Esmaeil akbari mina zolfy Asghar charmin
      • Open Access Article

        40 - Machine Learning-based Industrial LAN Networks Using Honeypots
        Pashaei Abbasgholi mina zolfy
      • Open Access Article

        41 - CKD-PML: Toward an Effective Model for Improving Diagnosis of Chronic Kidney Disease
        Razieh Asgarnezhad Karrar Ali Mohsin Alhameedawi
      • Open Access Article

        42 - Protein Secondary Structure Prediction: a Literature Review with Focus on Machine Learning Approaches
        Leila Khalatbari Mohammad Reza Kangavari
      • Open Access Article

        43 - Discovering a Way to Analyze Customer Emotions on Social Media for use in Advertising Systems
        leila khajehvand Abbas Toloie Eshlaghy Morteza Mosakhani
        Recently, social networks have attracted special attention. In various social networks, users constantly express their public as well as private opinions on various topics. Twitter is one of these social networks that has become very popular in the last decade. This social network provides organizations with a fast and effective way to analyze customers' feelings, views, and criticisms for market success. Sentiment analysis is a process in which people's opinions, feelings, and attitudes about a particular subject are extracted. There has been a lot of research on sentiment analysis based on user comments, documents, and articles. Analyzing Twitter data is very different, because tweets are limited to 280 characters and force users to express their feelings concisely. The best results in sentiment classification are obtained from machine learning techniques such as naïve Bayes and the support vector machine. In this research, a method for analyzing sentiments in social networks is presented. We have tried to improve the Bayesian classification of text by focusing on the data preprocessing and feature selection stages, and then users' feelings are analyzed. The classification problem has been formulated and solved using the latest achievements in the field of machine learning. The proposed method is evaluated on a Twitter dataset and compared with other classification methods, and it has shown the best performance.
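
        A minimal sketch of the Bayesian text-classification step with simple preprocessing and feature selection, as described above; the example tweets, labels, and the choice of TF-IDF with a chi-squared selector are hypothetical illustrations.

        # Hypothetical sketch: preprocess tweets, select features, and classify sentiment with naive Bayes.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        tweets = ["great service, fast delivery!", "worst support ever", "not bad at all"]   # hypothetical
        labels = ["positive", "negative", "positive"]

        model = make_pipeline(
            TfidfVectorizer(lowercase=True, stop_words="english", ngram_range=(1, 2)),
            SelectKBest(chi2, k=5),                       # keep only the most discriminative terms
            MultinomialNB(),
        )
        model.fit(tweets, labels)
        print(model.predict(["delivery was fast", "support was terrible"]))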
      • Open Access Article

        44 - Short-Term Tuberculosis Incidence Rate Prediction for Europe using Machine learning Algorithms
        Jamilu Yahaya Maipan-uku Nadire Cavus Boran Sekeroglu
      • Open Access Article

        45 - Humanitarian Smart Supply Chain: Classification and New Trends for Future Research
        Fatemeh Kheildar Parvaneh Samouei Jalal Ashayeri
        During crises, relief supply chain management (also known as humanitarian supply chain management) has received great attention. The core questions facing many humanitarian organizations are: where are their strengths and weaknesses? Are they positioned to be effective in their supply chain system? What challenges do they need to overcome? What do they need to do to take advantage of the technological opportunities offered nowadays? These questions have been addressed extensively during the past two decades. This paper reviews and classifies papers from key areas of the humanitarian supply chain such as location, certainty and uncertainty, relief teams and injured (patient) classification, machine learning, queueing theory, the employed research methods, solution methods, and the type of objective functions. The paper begins by defining what the "humanitarian" ecosystem may include and which actors play important roles. Afterwards, certain critical views of the humanitarian relief supply chain are examined. These critical views would help researchers to identify further research orientations and areas for overcoming crises in the real world.
      • Open Access Article

        46 - Diabetes detection via machine learning using four implemented spanning tree algorithms
        Yas  Ghiasi Mehdi Seif Barghy Davar PISHVA
        This paper considers an accurate and efficient diabetes detection scheme via machine learning. It uses the science of data mining and pattern matching in its diabetes diagnosis process. It implements and evaluates 4 machine learning classification algorithms, namely De More
        This paper considers an accurate and efficient diabetes detection scheme based on machine learning. It applies data mining and pattern matching in its diabetes diagnosis process. It implements and evaluates four machine learning classification algorithms, namely decision tree, random forest, XGBoost, and LGBM, and then selects and introduces the best-performing one using multi-criteria decision-making methods. The results reveal that the random forest algorithm outperformed the other algorithms with higher accuracy. The paper also examines which features have the greatest effect on diabetes detection. Considering that diabetes is one of the most deadly, disabling, and costly diseases observed today, that its prevalence is increasing alarmingly, and that its diagnosis is difficult because of many vague signs and symptoms, such an approach can help doctors increase the accuracy of their diagnosis and treatment schemes. Hence, this paper uses data mining as a tool to gather and analyze existing data on diabetes and to assist doctors in the diagnosis and treatment process. The main contribution of this paper is therefore its applied nature in an essential field and the accuracy of its pattern recognition via several analytical approaches. Manuscript profile
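        A minimal sketch of the four-classifier comparison described above, using a synthetic stand-in for a diabetes dataset; scikit-learn's gradient-boosting classifiers stand in for XGBoost and LGBM here so the example runs without extra packages.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              HistGradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for a diabetes dataset (e.g. glucose, BMI, age ... -> diabetic or not)
X, y = make_classification(n_samples=768, n_features=8, n_informative=5, random_state=0)

models = {
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Gradient boosting": GradientBoostingClassifier(random_state=0),
    "Hist. gradient boosting": HistGradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:25s} accuracy = {acc:.3f}")
```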
      • Open Access Article

        47 - Role of Fuzzy Sets on Artificial Intelligence Methods: A Literature Review
        Cengiz Kahraman Sezi Onar Basar Oztaysi Selcuk Cebi
        Machines can model and improve the human mind's capabilities through artificial intelligence. One of the most popular tools of artificial intelligence is fuzzy sets, which can capture and model the vagueness and imprecision in human thought. This paper first introduces the recent extensions of ordinary fuzzy sets and then presents a literature review on the integration of fuzzy sets with other artificial intelligence techniques such as automated reasoning, autonomous agents, multi-agent systems, machine learning, case-based reasoning, deep learning, information reasoning, information representation, natural language processing, symbolic reasoning, and neural networks. Graphical illustrations of the literature review results are presented for each of these integrated artificial intelligence techniques. The results of a patent search on fuzzy artificial intelligence are also given. Manuscript profile
      • Open Access Article

        48 - Design an Intelligent Multi-agent Computer-aided System for Recommender Systems
        Ramazan Teimouri Yansari Mojtaba Ajoudani Seyed Reza Mosayyebi
      • Open Access Article

        49 - Prediction of Heusler Alloys with Giant Magnetocaloric Effect using Machine-Learning
        Tasnim Gharaibeh Pnina Ari-Gur Elise de Doncker
      • Open Access Article

        50 - Feature Selection in Big Data by Using the Enhancement of the Mahalanobis–Taguchi System; Case Study: Identifying Bad Credit Clients of a Private Bank of the Islamic Republic of Iran
        Shahin Ordikhani Sara Habibi
      • Open Access Article

        51 - Presenting financial bankruptcy risk prediction model of stock and transborder companies using machine learning algorithms
        Mohsen Aali Seyed Alireza Mirarab Baygi Nima Farajian
        Bankruptcy or business failure can have a negative impact on both the company itself and the global economy. In this research, the financial bankruptcy risk of stock exchange and transborder companies is predicted using machine learning algorithms, where the ultimate goal is to predict the financial bankruptcy risk of these companies. Ensemble (collective) learning is a field of machine learning in which, instead of using a single model to solve a problem, multiple models are combined to increase the predictive power of the overall model. Each model is retrained using optimal features. As a result, the accuracy of the machine learning model built with the stacking method, one of the strongest ensemble learning techniques, is higher than that of similar methods for predicting financial bankruptcy risk. Investors always want to protect their capital by anticipating the possibility of a company's bankruptcy; therefore, they are looking for ways to predict the bankruptcy of companies. Manuscript profile
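        The stacking idea mentioned above can be sketched with scikit-learn as follows; the synthetic data is a hypothetical stand-in for the financial ratios the paper actually uses, and the base learners shown are illustrative choices rather than the paper's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# synthetic stand-in for financial-ratio features labelled bankrupt / healthy (imbalanced)
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)

# stacking: base models' predictions become inputs of a final meta-learner
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
print("stacked 5-fold accuracy:", cross_val_score(stack, X, y, cv=5).mean().round(3))
```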
      • Open Access Article

        52 - Stock trading strategy based on regression learning algorithms
        Naaser Heydari Majid Zanjirdar Ali Lalbar
        The aim of this study is to develop a stock trading strategy using regression learning algorithms. The researcher used the Yahoo Finance database to collect the necessary data via Python programming. Key technical analysis indicators and oscillators were calculated and incorporated into the model. The performance of the regression algorithms was evaluated using measures such as the coefficient of determination, mean absolute error, and root mean squared error. Advanced statistical methods and software, including Python, Spyder, SPSS, and Excel, were employed to analyze the differences between the evaluation indices of the designed algorithms, and the Kruskal-Wallis test was used for meaningful comparison. Additionally, a diversified research sample consisting of companies from various sectors was chosen so that the findings could be generalized. The selected companies were actively traded on the New York Stock Exchange with an average volume greater than 1 million and a market value larger than 200 trillion dollars. The sample was determined using a filtering method on 28/06/2021, yielding 41 companies as the research sample. The research was completed by the end of February 2023, and the random forest trading strategy model was identified as the most suitable approach. Keywords: Trading Strategy, Machine Learning, Regression Algorithms, Stock Exchange. Manuscript profile
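        The evaluation measures named above (coefficient of determination, MAE, RMSE) can be computed as in the following sketch, where a random forest regressor predicts the next day's price from two simple technical indicators built on a simulated price series; the indicators, data, and model settings are illustrative assumptions, not the study's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# simulated closing-price series; the study builds indicators from Yahoo Finance data
rng = np.random.default_rng(5)
close = np.cumsum(rng.normal(0, 1, 600)) + 100

window = 10
sma = np.convolve(close, np.ones(window) / window, mode="valid")   # simple moving average
momentum = close[window - 1:] - close[:len(sma)]                   # 10-day momentum
feats = np.column_stack([sma, momentum])    # row i uses prices up to day i + window - 1

X, y = feats[:-1], close[window:]           # target: next day's close
split = int(0.8 * len(X))
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:split], y[:split])
pred = model.predict(X[split:])

print("R^2 :", round(r2_score(y[split:], pred), 3))
print("MAE :", round(mean_absolute_error(y[split:], pred), 3))
print("RMSE:", round(float(np.sqrt(mean_squared_error(y[split:], pred))), 3))
```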
      • Open Access Article

        53 - A Comprehensive Review on Data-Driven Techniques in Smart Power Grids
        Khalegh Behrouz Dehkordi Homa Movahednejad Mahdi Sharifi
        As a promising vision toward high reliability and better energy management, today's power grid is transitioning to the smart grid (SG). This process is continuously evolving and needs advanced methods to process the big data produced by its different segments. Artificial intelligence methods can offer data-driven services by extracting valuable information from the data produced by meter devices and sensors in smart grids. To this end, machine learning (ML), deep learning (DL), reinforcement learning (RL), and deep reinforcement learning (DRL) can be applied. These methods are able to process huge amounts of data and propose appropriate solutions to complex power-industry problems. In this paper, the state-of-the-art artificial intelligence approaches used in smart power grids are investigated with respect to their applications and data sources. The role of big data in smart power grids, its features such as the data life cycle, and efficient services such as forecasting, predictive maintenance, and fault detection are also discussed. Manuscript profile
      • Open Access Article

        54 - An Improved Tracking-Learning-Detection Algorithm for Low Frame Rate
        Hooman Moridvaisi Farbod Razzazi Mohammad Ali Pourmina Massoud Dousti
        The conventional Tracking-Learning-Detection (TLD) algorithm is sensitive to illumination change, clutter, and low frame rate, and this results in drift or even loss of the target. To overcome these shortcomings and increase robustness, the TLD structure is improved by integrating mean-shift tracking and co-training learning, which yields better results under low frame rate (LFR) conditions and increases the robustness and accuracy of TLD tracking. The mean-shift tracking algorithm is robust to rotation, partial occlusion, and scale change, is simple to implement, and requires little computation time. The co-training learning algorithm, with two independent classifiers, can learn changes in the target's features during the online tracking process. The extended structure can therefore solve the problem of losing the tracked object in LFR videos together with other challenges. Comparative evaluations against other state-of-the-art tracking algorithms under various scenarios from the well-known TB-100 dataset demonstrate the superior tracking robustness and stability of the proposed algorithm. In scenarios with the challenges mentioned above, the proposed structure based on the TLD architecture improves the results by about 33% on average compared to the traditional TLD algorithm. Manuscript profile
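        For readers unfamiliar with the mean-shift component, the following OpenCV sketch shows plain mean-shift tracking of a colour histogram; the video path and initial window are hypothetical placeholders, and the paper's method additionally integrates this into the TLD framework with co-training, which is not reproduced here.

```python
import cv2

cap = cv2.VideoCapture("video.mp4")          # hypothetical video file
ok, frame = cap.read()

x, y, w, h = 300, 200, 100, 80               # hypothetical initial target window
track_window = (x, y, w, h)

# build a hue histogram of the initial region of interest
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # shift the window toward the mode of the back-projected histogram
    _, track_window = cv2.meanShift(back_proj, track_window, term_crit)
    print("tracked window:", track_window)
cap.release()
```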
      • Open Access Article

        55 - Utilizing Firefly Algorithm-Optimized ANFIS for Estimating Engine Torque and Emissions Based on Fuel Use and Speed
        Mahmut Dirik
      • Open Access Article

        56 - Body Weight Prediction of Dromedary Camels Using the Machine Learning Models
        N. Asadzadeh M. Bitaraf Sani E. Shams Davodly J. Zare Harofte M. Khojestehkey S. Abbaasi A. Shafie Naderi
      • Open Access Article

        57 - Predicting product choice by customers based on neuromarketing with Chaotic salp swarm algorithm
        Marzieh Maleki Zahra Dasht Lali
        Understanding how consumers make decisions is one of the important topics in customer behavior that is addressed by neuromarketing. The purpose of this article is to present a new neuromarketing approach that records brain signals, extracts and selects important features, and classifies them in order to better predict product selection by customers. Brain signals were recorded from twenty-five participants who viewed the products and were characterized with the higher-order spectra method. To select the best features, a chaotic salp swarm algorithm is presented, which can identify the effective features with high search power; for the final prediction, different classifiers are combined in an ensemble learning scheme. In the proposed model, the higher-order spectra method was applied to extract the phase information of the electroencephalogram signal in order to investigate the relationship between liking and disliking a product, which yielded more than seven hundred features. Feature selection with the salp swarm algorithm improved by logistic chaos mapping then reduced the features from 742 to 198. The results showed that the proposed model achieved an average accuracy of 75.99% in detecting users' choices across all products, a 3.75% improvement over similar studies. Manuscript profile
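        The logistic chaos map mentioned above is easy to illustrate; the sketch below uses it only to generate candidate binary feature masks in a toy wrapper-selection loop, which is a drastic simplification of the paper's chaotic salp swarm algorithm, with synthetic data standing in for the EEG-derived features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=40, n_informative=8, random_state=0)

def logistic_map(x, r=4.0):
    # chaotic logistic map: x_{n+1} = r * x_n * (1 - x_n)
    return r * x * (1.0 - x)

def fitness(mask):
    # wrapper fitness: cross-validated accuracy of a classifier on the selected features
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

best_mask, best_fit = None, -1.0
x = 0.7                                   # arbitrary chaotic seed
for _ in range(20):                       # 20 chaotic candidate masks
    seq = []
    for _ in range(X.shape[1]):
        x = logistic_map(x)
        seq.append(x)
    mask = np.array(seq) > 0.5            # threshold chaotic values into a binary mask
    f = fitness(mask)
    if f > best_fit:
        best_mask, best_fit = mask, f

print(best_mask.sum(), "features selected, CV accuracy", round(best_fit, 3))
```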
      • Open Access Article

        58 - Presenting a Fast Classifier Based on Unsupervised Learning for Diagnosing Diseases
        Najmeh Hosseinpour Afzal Ghaseimi
      • Open Access Article

        59 - A New Hybrid Model of K-Means and Naïve Bayes Algorithms for Feature Selection in Text Documents Categorization
        Ali Allahverdipour Farhad Soleimanian Gharehchopogh
      • Open Access Article

        60 - Sentimental Categorization of Persian News Headlines using Three Machine Learning Techniques Versus Human Categorization
        Vahid Mirzaeian
      • Open Access Article

        61 - A Semi-Supervised Human Action Learning
        Mohsen Tavana Mohammad Mohammadi Hamid Parvin
      • Open Access Article

        62 - Implicit Emotion Detection from Text with Information Fusion
        Nooshin Riahi Pegah Safari
      • Open Access Article

        63 - Presenting a Real Time Method for Automatic Detection of Diabetes Based on Fuzzy Reward-Penalty System
        Najmeh Hosseinpour Mohammad Mosleh Saeed Setayeshi
      • Open Access Article

        64 - Evaluating Factors Affecting Project Success: An Agile Approach
        Mohammad Sheikhalishahi Mohammad Amin Amani Ayria Behdinian
      • Open Access Article

        65 - Analyzing methods and approaches to produce automatic space layouts
        Mohammad Hadi Kaboli Seyed Aliakbar Sadri Mohamadreza Soleimani Mitra Mirzarezaee
        Aims: This study presents an approach for automatic spatial arrangement aimed at creating more economical buildings. More than 60 years of studies on the generation of automatic spatial layouts have shown that architectural layouts strongly affect such benefits; nevertheless, these studies have mainly examined the subject mathematically. The purpose of this research was to provide a new categorization of methods for generating architectural layouts, investigate their applications, compare the existing approaches and methods, and finally introduce a model for the automatic generation of spatial arrangements. Methods: From 105 reliable national and international databases, 34 studies on the generation of spatial arrangements were selected and analyzed using the content analysis method. Findings: The results indicated that the generation of automatic spatial layouts can be organized into six approaches from the perspective of problem representation. Additionally, the benefits and applications of each approach were examined based on qualitative criteria. Conclusion: Along with a general model, the automatic spatial architectural layout design was established in three different methods of part-to-whole and whole-to-part relationships, together with the principle of expertise and its applications. Manuscript profile
      • Open Access Article

        66 - A Machine Learning-based Model for predicting Stochastic BTI Effects
        Siavash Esshaghi Mohammad Bazli Arash Esshaghi
      • Open Access Article

        67 - Presenting a Comprehensive Model for Measuring the Liquidity Risk of Banks Listed on the Tehran Stock Exchange (Case Study: Mellat Bank)
        Toraj Azari Mojtaba Tastori Reza Tehrani
        Poor liquidity management is one of the most important risks for any bank, and lack of attention to liquidity risk leads to irreparable consequences. Preventing liquidity risk requires a comprehensive measurement method, but liquidity risk is a complicated issue, and this complexity makes it difficult even to define it properly. In addition, identifying the determinants of liquidity risk and formulating the related objective function to measure its value is a difficult task. To address these problems and assess liquidity risk and its key factors, this study proposes a model that uses artificial neural networks and Bayesian networks. The design and implementation of this model include several algorithms and experiments to validate it. The Levenberg-Marquardt and genetic optimization algorithms are used to train the artificial neural networks. A case study at Bank Mellat is also implemented to demonstrate the feasibility, efficiency, accuracy, and flexibility of the proposed liquidity risk measurement model. Manuscript profile
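        Levenberg-Marquardt training of a small network can be sketched with SciPy's least-squares solver, as below; the toy data, network size, and target function are assumptions for illustration only and do not reproduce the paper's liquidity-risk model.

```python
import numpy as np
from scipy.optimize import least_squares

# toy regression data, a stand-in for the bank's liquidity indicators
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X @ np.array([1.5, -0.7, 0.3])) + 0.05 * rng.standard_normal(200)

n_in, n_hidden = X.shape[1], 8

def unpack(p):
    W1 = p[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = p[n_in * n_hidden:n_in * n_hidden + n_hidden]
    W2 = p[n_in * n_hidden + n_hidden:-1]
    b2 = p[-1]
    return W1, b1, W2, b2

def residuals(p):
    # one hidden tanh layer, linear output; LM minimizes the sum of squared residuals
    W1, b1, W2, b2 = unpack(p)
    hidden = np.tanh(X @ W1 + b1)
    return hidden @ W2 + b2 - y

p0 = 0.1 * rng.standard_normal(n_in * n_hidden + 2 * n_hidden + 1)
sol = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt
print("training RMSE:", np.sqrt(np.mean(sol.fun ** 2)))
```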
      • Open Access Article

        68 - Applying machine learning models in creation of share optimum portfolio and their comparison
        Mohammad Sarchami Ahmad khodamipour Majid Mohammadi Hadis Zeinali
        Although econometric models are appropriate for describing and evaluating the relationships between variables and for statistical inference, they have some limitations for financial analysis. Many efforts have been made to model nonlinear relationships in financial data using machine learning techniques. The purpose of this study is to apply machine learning models to form optimal stock portfolios and compare their performance. The statistical sample consists of 156 companies listed on the Tehran Stock Exchange during the period 2009-2018. After data collection, the intended deep learning models were tested in Anaconda with the Python programming language, and the ability of each model to form an optimal stock portfolio was then assessed using return, composite return, and the Treynor and Jensen criteria. Given the risk-free rate and the market return rate, the investor seeks to earn more than these two rates by forming a portfolio; based on the portfolio evaluation results of the Treynor and Jensen indexes, it was concluded that the deep convolutional neural network is able to form an optimal portfolio, whereas the long short-term memory model is not capable of optimal portfolio formation. Manuscript profile
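        The Treynor and Jensen criteria used above reduce to simple formulas on excess returns; the sketch below computes both for hypothetical daily portfolio and market returns and an assumed risk-free rate.

```python
import numpy as np

# hypothetical daily returns of a portfolio and the market index
rng = np.random.default_rng(1)
market = rng.normal(0.0005, 0.01, 250)
portfolio = 0.0002 + 1.2 * market + rng.normal(0, 0.005, 250)
rf = 0.0001  # assumed daily risk-free rate

# beta of the portfolio against the market
beta = np.cov(portfolio, market)[0, 1] / np.var(market, ddof=1)

treynor = (portfolio.mean() - rf) / beta                              # excess return per unit of beta
jensen_alpha = portfolio.mean() - (rf + beta * (market.mean() - rf))  # return above CAPM expectation
print(f"beta={beta:.2f}, Treynor={treynor:.5f}, Jensen alpha={jensen_alpha:.5f}")
```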
      • Open Access Article

        69 - Presenting the combined algorithm of machine learning and the combination of risk metrics and fuzzy theory in choosing an investment portfolio
        Danial Mohammadi Seyed Jafar Sajadi Emran Mohammadi Naeim Shokri
        The current research was conducted to find an optimal portfolio for investing in stock exchange shares. Methods based on artificial intelligence, together with methods aimed at reducing risk measures, are currently very popular among analysts and researchers in this field. The aim of the research is to form a portfolio, using machine learning methods and risk measures combined with fuzzy theory, that achieves a better return than the average return of the market. The output of each method is fed into a random forest algorithm, which makes the prediction; in the last step, the prediction output is entered into a value-at-risk (VaR) and conditional value-at-risk (CVaR) optimization model with a fuzzy theory approach to form the capital portfolio. The share data is daily and covers the period from the beginning of 2014 to the middle of 2018. Each of these methods and steps was compared with the real return of the market. The CVaR risk measure performs better than the VaR risk measure, and among the machine learning algorithms used, the random forest algorithm achieved the best results in choosing the investment portfolio. Manuscript profile
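        Historical VaR and CVaR, the two risk measures referred to above, can be estimated from a return series as in this sketch; the returns are simulated here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0.0004, 0.015, 1000)   # hypothetical daily portfolio returns

alpha = 0.95
var = -np.quantile(returns, 1 - alpha)       # historical 95% VaR, expressed as a loss
cvar = -returns[returns <= -var].mean()      # CVaR: expected loss beyond the VaR threshold
print(f"95% VaR: {var:.4f}, 95% CVaR: {cvar:.4f}")
```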
      • Open Access Article

        70 - The comparative study of the accuracy of prediction of Support Vector Machine, Bayesian Network and C5 models in predicting underpricing for listed companies at TSE and OTC
        Bita Dehghan Khanghahi Jamal Bahrisales Saeed Jabbarzadeh Kangarlouie Ali Ashtab
        Previous research on the short-term performance of initial public offerings shows that newly listed stocks outperform the market in the short run. Statistical models have been able to make good predictions about the performance of new stocks, but the restrictive assumptions of some of these models have limited them, so other ways to deal with these limitations and improve forecasting performance were introduced. Since the initial public offering is an important issue in the capital market, this study investigates different classification models to find one with high efficiency and accuracy in predicting the underpricing of initial public offering (IPO) stocks. To achieve the research goal, systematic elimination sampling was used to select 84 companies among all companies listed on the Tehran Stock Exchange (TSE) and 54 companies among all companies listed Over the Counter (OTC) from 2003 to 2017. The results showed that the support vector machine (SVM), Bayesian network, and C5 decision tree models are highly accurate in predicting underpricing. The results also showed that the influential variables included assets growth, auditor tenure, auditor specialty in the industry, financing ratio, P/E, CFO ratio, ROA, stock price fluctuation, growth opportunity, and audit firm size. Manuscript profile
      • Open Access Article

        71 - Comparison of different machine learning models in stock market index forecasting
        Maryam Sohrabi Seyed Mozaffar Mirbargkar Ebrahim Chirani Sina Kheradyar
        Predicting the time series of financial markets is a challenging issue in the field of time series studies and has attracted the attention of many researchers. With the availability of big data, this issue has driven developments in machine learning models. Given its importance, this study compares different machine learning models, including random forests, support vector machines, artificial neural networks, and deep-learning-based recurrent neural networks, in forecasting the total index of the Tehran Stock Exchange during the period 2013 to 2020. The prediction results for 1-, 3-, and 6-day horizons over the out-of-sample period show that the machine learning method based on the long short-term memory (LSTM) network, a recurrent neural network, performs better than the other models. Manuscript profile
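        A bare-bones version of the LSTM forecaster the study favours might look like the sketch below (TensorFlow/Keras assumed available); the simulated index series, window length, and network size are illustrative, and preprocessing such as scaling is omitted for brevity.

```python
import numpy as np
import tensorflow as tf

# simulated daily index levels; in the paper these would be TSE total-index values
rng = np.random.default_rng(3)
index = np.cumsum(rng.normal(0, 1, 1500)) + 1000

def make_windows(series, lookback=30):
    # turn the series into (samples, lookback, 1) inputs and next-day targets
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

X, y = make_windows(index)
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=5, batch_size=32, verbose=0)
print("test MSE:", model.evaluate(X[split:], y[split:], verbose=0))
```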
      • Open Access Article

        72 - Development of a new ensemble learning approach for stock portfolio selection using multiclass SVM and genetic algorithm
        Nasrin Bagheri Mazraeh Amir Daneshvar Mehdi Madanchi Zaj
        The volume and speed of transactions in financial markets have increased significantly and undergone extensive changes. When facing increasing, decreasing, or fluctuating trends in the stock market, determining the right trading strategy is very important; therefore, complex meta-heuristic models are used for choosing a suitable strategy. In this research, an attempt is made to develop a new method for selecting and optimizing the stock portfolio based on an ensemble learning algorithm and a genetic algorithm in order to select the best trading strategy and achieve greater returns with less risk. A six-class support vector machine (SVM) algorithm is used to predict returns and generate a buying signal, and a dynamic genetic algorithm is used to optimize the trading rules. Ensemble learning methods, including bagging, are used to improve the accuracy of return classification. Data related to each share and the fundamental variables, at a daily time interval between the years 1390 and 1399, are used as training and test data. The obtained results are promising compared to traditional methods. Manuscript profile
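        The bagging component can be illustrated with scikit-learn by wrapping a support vector machine in a BaggingClassifier, as below; the six-class synthetic data is a stand-in for the return-direction labels, and none of the genetic-algorithm rule optimization is reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# synthetic six-class stand-in for the daily return classes used in the paper
X, y = make_classification(n_samples=500, n_features=20, n_classes=6,
                           n_informative=6, random_state=0)

single = SVC()
bagged = BaggingClassifier(SVC(), n_estimators=25, random_state=0)  # bag of SVMs on bootstrap samples
print("single SVM :", cross_val_score(single, X, y, cv=5).mean().round(3))
print("bagged SVM :", cross_val_score(bagged, X, y, cv=5).mean().round(3))
```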
      • Open Access Article

        73 - Stock portfolio optimization of companies listed on the Tehran Stock Exchange based on a combination of two-level ensemble machine learning methods and multi-objective meta-heuristic algorithms with a market timing approach
        Sanaz Faridi Amir Daneshvar Mahdi Madanchi Zaj Shadi Shahverdiani
        In this article, using the market timing approach and homogeneous and inhomogeneous ensemble learning methods, buy, hold, and sell signals and market forecasts are produced based on the fundamental characteristics, technical characteristics, and return time series of each company over the 100 days preceding the current day. On this basis, 208 companies active between 1390 and 1399 were selected. Data from the five years 1390 to 1394 are used to train the two-level ensemble learning machine (HHEL) and to forecast the market trend based on the market timing strategy, while the remaining data are used for testing through stock portfolio optimization based on maximizing return and minimizing risk. The investment portfolio uses the MOPSO and NSGA-II algorithms and is compared with the portfolio obtained with the buy-and-hold strategy. The results showed that the MOPSO algorithm achieved the highest portfolio return with 96.437%, compared to the NSGA-II algorithm with 91.157% and the buy-and-hold method with 13.058%. Also, the portfolio risk of the NSGA-II algorithm was much lower than that of the MOPSO algorithm, at 0.792% and 1.367%, respectively. Manuscript profile
      • Open Access Article

        74 - Comparison of multiple linear regression and machine learning algorithms in predicting cash holdings
        Samira Seif Mostafa Yousofi Tezerjan
        In recent years, the financial literature has paid more attention to the level of cash holdings of companies; therefore, forecasting is important for determining the optimal level of cash holdings. In this research, using linear and nonlinear methods and 13 influential input variables, the amount of cash held by 103 companies admitted to the Iran Stock Exchange during the years 2013 to 2021 is predicted. The methods used for prediction include multiple linear regression (MLR), k-nearest neighbors (KNN), support vector machine (SVM), and multi-layer neural networks (MLNN). The results show that the traditional multiple linear regression method has not been successful in predicting cash, while the machine learning algorithms have been superior with an accuracy of 0.99. The variables of earnings per share, the ratio of current assets to current liabilities, and the ratio of short-term debt to total assets have had the greatest impact in all algorithms. Therefore, managers can use advanced machine learning algorithms to predict the future cash flow of companies. Manuscript profile
      • Open Access Article

        75 - Predicting cash holdings using supervised machine learning algorithms in companies listed on the Tehran Stock Exchange (TSE)
        Saeid Fallahpour Reza Raei Negar Tavakoli
        Based on 22 selected features (examined during the research) and machine learning methods, this study predicts the cash holdings of companies admitted to the Tehran Stock Exchange. A total of 201 companies were investigated from 1396 to 1400. Multiple linear regression, k-nearest neighbors, support vector regression, decision tree, random forest, the extreme gradient boosting algorithm, and multilayer neural networks are used for prediction. The results show that multiple linear regression and k-nearest neighbors produce high root mean square error (RMSE) and mean absolute error (MAE), while more complex algorithms, especially support vector regression, achieve higher accuracy. The findings also indicate that, after reducing the feature set to 15 variables, the machine learning methods, especially k-nearest neighbors, provide better results. Based on a paired-sample t-test, support vector regression performs better than the other supervised machine learning algorithms except the decision tree. The most important variables were company size and capital expenditures (CapEx), while the World Uncertainty Index and inflation were also relatively important. Therefore, using the support vector regression algorithm, the amount of cash can be predicted to a significant extent. Manuscript profile
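        The paired-sample t-test used above to compare algorithms can be reproduced on per-fold errors, as in this sketch; the regression data is synthetic, and the two models (support vector regression and a decision tree) are shown only to demonstrate the test.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.datasets import make_regression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=400, n_features=15, noise=10, random_state=0)

svr_err, tree_err = [], []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    svr = SVR().fit(X[train], y[train])
    tree = DecisionTreeRegressor(random_state=0).fit(X[train], y[train])
    svr_err.append(mean_absolute_error(y[test], svr.predict(X[test])))
    tree_err.append(mean_absolute_error(y[test], tree.predict(X[test])))

# paired t-test on the per-fold MAE values of the two models
t, p = ttest_rel(svr_err, tree_err)
print(f"mean MAE  SVR={np.mean(svr_err):.2f}  tree={np.mean(tree_err):.2f}  p-value={p:.3f}")
```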
      • Open Access Article

        76 - Development of stock portfolio trading systems using machine learning methods
        Ali Heidarian Mohadeseh Moradi Mehr Ali Farhadian
        Investment portfolio theory is an important foundation of portfolio management, a well-studied but not yet saturated topic in the academic community. Integrating return forecasting into portfolio formation can improve the performance of the portfolio optimization model. Since machine learning models have shown superiority over statistical models, this research presents a two-stage approach to forming a stock portfolio: in the first stage, a neural network selects stocks suitable for purchase; in the second stage, the mean-variance (MV) model determines their optimal weights in the investment portfolio. Selecting suitable stocks and forming the stock portfolio are thus the two main stages of the developed model. In the first stage, a convolutional neural network model is proposed to predict stock buy and sell points for the next period; in the second stage, stocks labeled as buys are selected as suitable for purchase, and the MV model is used to determine their optimal weights in the stock portfolio. The results obtained using five stocks of the Tehran stock market as a study sample show that the return and Sharpe ratio of the proposed method are significantly better than those of traditional methods (without filtering suitable stocks). Manuscript profile
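        As a hedged illustration of the second stage, the sketch below computes global minimum-variance weights (a special case of the mean-variance model) and the resulting Sharpe ratio for five hypothetical return series; the paper's actual MV formulation and CNN-based stock filter are not reproduced here.

```python
import numpy as np

# hypothetical daily returns for five stocks (rows = days, cols = stocks)
rng = np.random.default_rng(4)
returns = rng.normal(0.0005, 0.02, size=(250, 5))

cov = np.cov(returns, rowvar=False)
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w.sum()                       # global minimum-variance weights: w = C^-1 1 / (1' C^-1 1)
print("weights:", np.round(w, 3))

port = returns @ w
sharpe = port.mean() / port.std() * np.sqrt(250)   # annualised, zero risk-free rate assumed
print("annualised Sharpe ratio:", round(sharpe, 2))
```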
      • Open Access Article

        77 - Risk Classification of Imbalanced Data for Car Insurance Companies: Machine Learning Approaches
        Farzan Khamesian Maryam Esna-Ashari Eric Dei Ofosu-Hene Farbod Khanizadeh
      • Open Access Article

        78 - The use of support vector machine and Naive Bayes algorithms and their combination with risk measures and fuzzy theory in the selection of stock portfolios
        Danial Mohammadi Emran Mohammadi Naeim Shokri Nima Heidari
        Purpose: The purpose of the current research is to create an optimal portfolio, using machine learning algorithms and fuzzy theory, that achieves a better return than the average return of the market (the total index of the stock exchange). Research methodology: In the first stage, the stocks of the selected companies are classified using the two introduced algorithms. In the next step, stocks assigned to the positive class are forecast for the next trading day with the help of a random forest algorithm. For each company, three predictions are made, which form the inputs of the fuzzy optimization method. Optimization is performed with the aim of minimizing risk using the value-at-risk (VaR) and conditional value-at-risk (CVaR) measures. The stock data is daily and covers the five-year period from the beginning of 2017 to the end of 2021. Findings: Each of the algorithms and risk measures was evaluated and compared with the actual market return. Based on the results, the CVaR risk measure performs better than the VaR risk measure, and the support vector machine algorithm achieved the better performance in choosing the investment portfolio. Originality/value: This research optimizes a sample capital portfolio by integrating machine learning methods and risk measures. Adding the VaR and CVaR risk metrics enhances decision-making regarding risk reduction. Forecasting with the random forest algorithm and using an approach based on fuzzy theory for risk and value analysis give the research an innovative perspective on portfolio formation. The findings provide investors and researchers with valuable insights in their search for better investment strategies. Manuscript profile
      • Open Access Article

        79 - A new approach using Machine Learning and Deep Learning for the prediction of cancer tumor
        Fatemeh Asgari Arian Minooei Somayeh Abdolahi Reza Shokrani Foroushani Atefeh Ghorbani
      • Open Access Article

        80 - Improving surface roughness in barrel finishing process using supervised machine learning
        Mohammad Sajjad Mahdieh Mehdi Bakhshi Zadeh Amirhossein Zare Reisabadi
      • Open Access Article

        81 - Providing a Strategic Model Based on a Machine Learning Approach for Automatic Opinion Assessment and Product Information Exploration in Digital Marketing
        Alireza AshouriRoudposhti Hormoz Mehrani Karim Hamdi
        The present study provides an automated strategic model for classifying and exploring the opinions expressed about a particular product, brand, or service by using machine learning and survey techniques. Applying such a strategic model can be very effective in identifying the characteristics of brands and clustering the factors shared between them, and it provides very valuable information in this regard. The results of this evaluation can be used to develop marketing management strategies and to improve this factor quantitatively or qualitatively. The model, based on machine learning and a deep neural network, identifies related opinions, measures different characteristics at different levels of evaluation, and automatically categorizes opinions depending on the quality of the presentation. The output of this model can be used efficiently, together with marketing capabilities, to improve the sales of the defined goods, brands, or services. The dataset used in this study consists of comments by Persian-language users of the Digikala and Holokish online sales sites, split into training and test sets (70% training data and 30% test data) and used in three alternative models to identify and classify the various properties of the goods and services in the dataset. The proposed model uses error functions to quantify how far its predictions deviate from the correct values; for this purpose, the mean squared error and the root mean squared error are used. The results show the high accuracy of the model's evaluations and its prediction of different conditions. Manuscript profile
      • Open Access Article

        82 - Evaluation and Prediction of W/C Ratio vs. Compressive Concrete Strength Using A.I and M.L Based on Random Forest Algorithm Approach
        R. Jamalpour
        Concrete, an artificial stone composed of cement, aggregate, water, and additives, is extensively utilized in contemporary civil projects. A pivotal characteristic of concrete is its capacity to efficiently serve various purposes and structural requirements. Cement, water, aggregate, and additives are pivotal parameters wherein even minor alterations can significantly impact concrete strength. Among these parameters, the Water/Cement (W/C) ratio holds particular significance due to its inverse correlation with strength. Traditionally, predicting concrete strength solely based on the water-to-cement ratio has been challenging. However, with advancements in AI and machine learning techniques coupled with ample data availability, accurate strength prediction is achievable. This paper presents an analysis of a diverse dataset comprising various concrete tests utilizing machine learning methodologies, followed by a comparative examination of the outcomes. Furthermore, this study scrutinizes a renowned dataset encompassing 1030 experiments, featuring diverse combinations of cement, water, aggregate, etc., employing artificial intelligence and machine learning techniques. Model accuracy and result fidelity are evaluated through rigorous sampling methodologies. Initially, the dataset is subjected to analysis utilizing the linear regression algorithm, followed by validation employing the random forest algorithm. The random forest algorithm is employed to predict the water-to-cement ratio and corresponding compressive strength for concrete with a density of 300 kg/m3. Notably, the obtained results exhibit a high level of concordance with experimental and laboratory findings from prior studies. Hence, the efficacy of the random forest algorithm in concrete strength prediction is established, offering promising prospects for future applications in this domain. Manuscript profile
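        The linear-regression-then-random-forest workflow described above might be sketched as follows; "concrete.csv" and its column names are hypothetical placeholders for the 1030-record dataset the paper analyzes.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# "concrete.csv" is a hypothetical export of the 1030-record dataset, assumed to have
# columns such as: cement, water, aggregate, additives, age, strength
df = pd.read_csv("concrete.csv")
df["w_c_ratio"] = df["water"] / df["cement"]      # the W/C ratio highlighted in the paper
X = df.drop(columns=["strength"])
y = df["strength"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# first a linear regression baseline, then a random forest, compared on held-out R^2
for model in (LinearRegression(), RandomForestRegressor(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "R2:", round(r2_score(y_te, model.predict(X_te)), 3))
```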