  • List of Articles


      • Open Access Article

        1 - Designing a hybrid model for classification of imbalanced data in the field of third-party insurance
        Mahnaz Manteqipour, Parisa Rahimkhani
        Introduction: The major part of Iran's insurance industry portfolio is compulsory civil liability insurance of motor vehicle owners against third parties, so understanding the behavior of this line of business helps provide better services to the customers of the insurance industry. Predicting the claim rate of an insurance policy from the features recorded for it is one of the industry's problems that can be solved with data mining techniques. Insurance is built on the law of large numbers: a sufficiently large number of policies is issued, only a small fraction of them incur claims, and the cost of those claims is compensated from the sum of the collected premiums. The insurance industry therefore faces imbalanced data, and this imbalance causes many challenges in classification. In the third-party insurance data set of this research there are 14 features for every policy, and the imbalance ratio is 1 to 0.0092, which is considered severe.
        Method: This research addresses the classification of severely imbalanced data in the field of third-party insurance. To overcome the imbalance, two hybrid models with different architectures are designed on top of five base models: Gaussian naive Bayes, support vector machine, logistic regression, decision tree, and nearest neighbor. The first hybrid model draws random samples from the whole data set and applies a resampling method before classification; the second samples from each label separately and applies a classification model to the combined selection (see the sketch after this abstract). The results of the two models are compared.
        Results: The proposed hybrid models predict the occurrence or non-occurrence of traffic accident claims better than the other data mining algorithms examined. Precision and recall show that the second hybrid model performs better. In the ensemble phase, the number of models taking part in simple voting is a hyperparameter that can be tuned to the company's strategy. Using a decision tree to combine the base models also gives better results than simple voting over them.
        Discussion: Further work on imbalanced-data classification could apply more sophisticated resampling algorithms and compare the results.
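        A minimal sketch of the second hybrid architecture (per-label sampling, the five base models, simple voting), assuming scikit-learn; the per-label sample size and the voting rule are illustrative assumptions, not the authors' exact configuration:

        ```python
        # Sketch: sample from each label separately, train the five base
        # models on the balanced selection, combine by majority voting.
        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import SVC
        from sklearn.linear_model import LogisticRegression
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.neighbors import KNeighborsClassifier

        def sample_per_label(X, y, n_per_label, rng):
            # Draw an equal-sized random sample from each class label.
            idx = np.concatenate([
                rng.choice(np.flatnonzero(y == label),
                           size=min(n_per_label, np.sum(y == label)),
                           replace=False)
                for label in np.unique(y)
            ])
            return X[idx], y[idx]

        def train_hybrid(X, y, n_per_label=500, seed=0):
            # X, y are NumPy arrays; labels are 0/1 (claim vs. no claim).
            rng = np.random.default_rng(seed)
            Xb, yb = sample_per_label(X, y, n_per_label, rng)
            models = [GaussianNB(), SVC(), LogisticRegression(max_iter=1000),
                      DecisionTreeClassifier(), KNeighborsClassifier()]
            return [m.fit(Xb, yb) for m in models]

        def predict_vote(models, X):
            # Simple voting; the number of participating models is the
            # hyperparameter mentioned in the abstract.
            votes = np.stack([m.predict(X) for m in models])
            return (votes.mean(axis=0) >= 0.5).astype(int)
        ```

        In this sketch, n_per_label controls how aggressively the majority class is undersampled; replacing predict_vote with a decision tree trained on the base models' outputs corresponds to the decision-tree ensembling the abstract reports as stronger than simple voting.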
      • Open Access Article

        2 - Comparison of classification methods in software development effort estimation
        Sadegh Ansaripour, Taghi Javdani Gandomani
        Introduction: The main goal of software companies is to provide solutions in various fields that better meet the needs of customers. Successful modeling depends on finding the right, accurate requirements; the key to successful development, and to adapting and integrating the separately developed parts, is selecting and prioritizing the requirements that advance the workflow and ultimately lead to a quality product. Validation is the key part of this work: it comprises the techniques that confirm a set of requirements is suitable for building a solution that meets the project's business objectives. Requirements change during a project, and managing those changes is important for ensuring that the software built is right for its stakeholders. This research examines the process of checking and validating software requirements.
        Method: Requirements are extracted through discovery, review, documentation, and understanding of user needs and system constraints. The results are presented as work products such as textual requirement descriptions, use cases, processing diagrams, and user-interface prototypes.
        Findings: Data mining and recommender systems can be used to elicit further requirements; alternatively, social networks and collaborative filtering can be used to identify and create requirements for large projects (a sketch of the latter idea follows this abstract).
        Discussion: In product development, requirements engineering approaches focus exclusively on requirements development, and the human element makes this process challenging. If these challenges are not handled well at this stage, they become extremely expensive after the software is produced, so errors should be minimized and identified and corrected as early as possible. One of the key issues in the requirements field is validation, which confirms, first, that the requirements can be implemented according to the system description and, second, that they have the essential characteristics: completeness, consistency, conformance to standard criteria, freedom from contradictions, absence of technical errors, and lack of ambiguity. The purpose of validation is to ensure that a sustainable, maintainable product is created according to the requirements.
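        A purely illustrative sketch, not from the paper, of how collaborative filtering could recommend candidate requirements: stakeholders rate requirements, and unrated requirements are scored by similarity-weighted votes of other stakeholders. The rating matrix and function names are hypothetical.

        ```python
        # User-based collaborative filtering over a stakeholder x
        # requirement rating matrix (0 = not yet rated).
        import numpy as np

        def recommend_requirements(ratings, stakeholder, top_k=1):
            # Cosine similarity between stakeholders' rating vectors.
            norms = np.linalg.norm(ratings, axis=1, keepdims=True)
            sims = (ratings @ ratings.T) / np.clip(norms @ norms.T, 1e-9, None)
            weights = sims[stakeholder].copy()
            weights[stakeholder] = 0.0                  # ignore self-similarity
            scores = weights @ ratings                  # weighted vote per requirement
            scores[ratings[stakeholder] > 0] = -np.inf  # only unrated requirements
            return np.argsort(scores)[::-1][:top_k]

        ratings = np.array([[5, 0, 3, 4],
                            [4, 2, 0, 4],
                            [1, 5, 4, 0]], dtype=float)
        print(recommend_requirements(ratings, stakeholder=0))  # -> [1]
        ```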
      • Open Access Article

        3 - Developing a strategy for the use of the Internet of Things in Shiraz residential buildings with a combined DANP-SWOT approach
        Reza Tahmasebi, Ardalan Feili
        Introduction: This research uses a combined DANP-SWOT approach to derive strategies for applying the Internet of Things in residential buildings in Shiraz. A list of 21 factors in four clusters of strengths, weaknesses, opportunities, and threats was compiled from the literature review.
        Method: Using the factor scores to draw the SPACE diagram, the strengths outweighed the weaknesses and the opportunities outweighed the environmental threats, placing the study in the offensive strategic position (SO); a sketch of this scoring step follows the abstract.
        Results: Nine offensive strategies for applying the Internet of Things in Shiraz residential buildings were formulated. Based on the scores extracted with the combined DANP-SWOT approach, quality of life, sustainable development, and cost reduction were the most important factors in the strength cluster; lack of knowledge and privacy security were the two most important factors in the weakness cluster; demand and economic growth were the most important factors in the opportunity cluster; and cultural challenges, laws, and standards were the most important factors in the threat cluster.
        Discussion: Given the priority of the research factors, suggestions were made for educating and raising the awareness of end consumers, and measures such as banking facilities and tax exemptions were proposed, alongside standardizing and easing regulations, to encourage people to use the Internet of Things in smart residential homes.
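        A hedged sketch of the SPACE-positioning step: aggregate the weighted cluster scores and read off the strategic quadrant. The example weights are hypothetical, not the study's DANP results.

        ```python
        # SWOT-based SPACE positioning: x = strengths - weaknesses,
        # y = opportunities - threats; (+, +) is the offensive (SO) quadrant.
        def space_position(strengths, weaknesses, opportunities, threats):
            # Each argument is a list of factor scores for that cluster.
            x = sum(strengths) / len(strengths) - sum(weaknesses) / len(weaknesses)
            y = sum(opportunities) / len(opportunities) - sum(threats) / len(threats)
            if x > 0 and y > 0:
                quadrant = "offensive (SO)"
            elif x > 0:
                quadrant = "competitive (ST)"
            elif y > 0:
                quadrant = "conservative (WO)"
            else:
                quadrant = "defensive (WT)"
            return x, y, quadrant

        # Strengths outweigh weaknesses and opportunities outweigh threats,
        # reproducing the offensive (SO) position reported above.
        print(space_position([4.2, 3.8, 4.0], [2.1, 2.5], [4.5, 3.9], [2.8, 3.0]))
        ```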
      • Open Access Article

        4 - Comparison of Linear and Non-linear Support Vector Machine Methods with Linear Regression for Short-term Prediction of the Queue Length and Arrival Volume of an Intersection Approach for Adaptive Control of Individual Traffic Lights
        Mohammad Ali Kooshan Moghadam, Mehdi Fallah Tafti
        Introduction: This study supports the development of adaptive traffic signal control systems that provide better traffic control at intersections. In this approach, if predicted data for future cycles are used to optimize the upcoming schedule, the controller can handle unforeseen conditions and manage traffic before the forthcoming cycles arrive. To obtain enough data to build such a model, data were collected at two intersections in Yazd, and the intersections were simulated in AIMSUN software and then calibrated and validated against existing conditions. Prediction accuracy was extracted for the proposed methods and compared with linear regression, using the RMSE, MAE, and GEH error measures.
        Method: The predicted queue length and arrival volume of each entry approach are major variables in the adaptive signal control process. Hence, linear and non-linear support vector regression (SVR) combined with a time series of previous cycles was used to predict these parameters, and linear regression models were developed as a conventional baseline (a sketch of this setup follows the abstract).
        Results: For the model combining linear SVR with the time series method, the optimal number of previous cycles used was 6 and 2 for predicting arrival volume at the Pajuhesh and Seyed Hassan Nasrollah intersections, respectively, and 9 and 11 for predicting queue length. For the model combining non-linear SVR with the time series method, the optimal number of previous cycles was 8 and 2 for arrival volume and 7 and 7 for queue length at the same intersections.
        Discussion: Comparing the developed models against real data with RMSE, MAE, and GEH showed that the combined non-linear SVR and time series model gave the best performance in predicting arrival volume. In predicting queue length, however, it outperformed the combined linear SVR model at only one of the intersections. The linear regression model was the weakest in all comparisons. It can be concluded that SVR combined with time series methods is an appropriate tool for predicting traffic parameters in these situations.
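        A minimal sketch of this setup, assuming scikit-learn: an SVR fed with the previous n signal cycles of a series (arrival volume or queue length), evaluated with RMSE, MAE, and GEH. The synthetic series, lag count, and kernel are assumptions; the study tuned the lags per intersection (e.g. 6 and 2 for arrival volume).

        ```python
        # SVR over lagged per-cycle observations, with the three error
        # measures used in the study.
        import numpy as np
        from sklearn.svm import SVR

        def make_lagged(series, n_lags):
            # Each row holds the n_lags previous cycle values; the target
            # is the value of the next cycle.
            X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
            return X, series[n_lags:]

        def geh(predicted, observed):
            # GEH statistic, commonly used for traffic volume comparisons.
            return np.sqrt(2 * (predicted - observed) ** 2 / (predicted + observed))

        rng = np.random.default_rng(0)
        # Placeholder per-cycle arrival volumes (synthetic, not field data).
        series = 300 + 30 * np.sin(np.arange(120) / 6) + rng.normal(0, 10, 120)

        X, y = make_lagged(series, n_lags=6)
        model = SVR(kernel="rbf").fit(X[:-20], y[:-20])   # non-linear SVR
        pred = model.predict(X[-20:])
        rmse = np.sqrt(np.mean((pred - y[-20:]) ** 2))
        mae = np.mean(np.abs(pred - y[-20:]))
        print(rmse, mae, geh(pred, y[-20:]).mean())
        ```

        Swapping kernel="rbf" for kernel="linear" gives the linear SVR variant the study compares against.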
      • Open Access Article

        5 - A new method for the security of encryption systems using unbalanced gates
        Seyyed Hamidreza Mousavi, Mehdi Safaeian, Amir Hassan Ahmadi Ghaleh
        Introduction: Sharing information in communication and computer systems demands high levels of security. Side-channel attacks are a main challenge for cryptographic systems; they are used to break encrypted devices such as smart cards. The purpose of this research is to introduce a new scheme for hardening on-chip encryption algorithms, based on a phase-locked loop (PLL) and an enhanced XOR gate in the Advanced Encryption Standard (AES) algorithm. By disturbing the power consumption and execution time of each round of the algorithm, the scheme protects the encryption algorithm against differential power analysis (DPA) attacks. The proposed method was implemented in TSMC 65 nm technology in Cadence, and the results show that the algorithm becomes immune to DPA. As overheads, the silicon area and power consumption increased by about 33% and 25%, respectively, while the clock rate was reduced by less than 3%.
        Method: In modern digital systems that carry classified information, data encryption is unavoidable; smart cards, portable electronic devices, mobile phones, and remote control devices all use encryption to resist unauthorized intruders [1][2]. Today's electronic systems also require high speed, low power consumption, and information security. The basis of this method is combining two characteristics, delay and power-noise injection, using the modified gates; a sketch of the effect on DPA follows the abstract.
        Results: Simulation results showed that the system has good resistance against DPA attacks. A key characteristic for assessing hardening methods is their hardware overhead and the additional power they impose; the hardware overhead and power consumption of the implemented method are presented in Table (2).
        Discussion: Compared with previous designs, the number of power traces needed by an attacker has almost doubled, and the only costs of the system are a 33% increase in occupied area and a 20% increase in power consumption.
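        An illustrative sketch with synthetic data (not the fabricated chip): a difference-of-means DPA on simulated power traces, showing how random timing jitter of the kind the PLL scheme injects blurs the leakage peak an attacker relies on. Trace counts, noise levels, and the leakage model are assumptions.

        ```python
        # Difference-of-means DPA: with the leakage fixed at one sample,
        # the peak is clear; with random jitter, it is spread and buried.
        import numpy as np

        rng = np.random.default_rng(1)
        n_traces, n_samples, leak_at = 2000, 100, 40
        bits = rng.integers(0, 2, n_traces)          # hypothetical key-dependent bit

        def traces(jitter):
            t = rng.normal(0, 1, (n_traces, n_samples))   # measurement noise
            for i in range(n_traces):
                pos = leak_at + (rng.integers(-10, 11) if jitter else 0)
                t[i, pos] += 0.5 * bits[i]                # data-dependent leakage
            return t

        for jitter in (False, True):
            t = traces(jitter)
            dom = t[bits == 1].mean(axis=0) - t[bits == 0].mean(axis=0)
            print("jitter" if jitter else "no jitter", "peak:", np.abs(dom).max())
        ```

        The jittered run needs far more traces to recover the same peak, which is the abstract's point that the required number of power diagrams roughly doubles.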
      • Open Access Article

        6 - A new approach to identifying and detecting credit card fraud using ANN-ICA
        Javad Balaee Kodehi, Mohammad Tahghighi Sharabyan
        Introduction: The imperialist competitive algorithm (ICA) is an evolutionary computing method for finding optimal answers to various optimization problems. It solves mathematical optimization problems by mathematically modeling the socio-political evolution process. Like other evolutionary methods, ICA forms an initial set of candidate solutions; these are known as chromosomes in the genetic algorithm, particles in particle swarm optimization, and countries in ICA. The algorithm gradually improves these initial solutions (countries) through a dedicated iterative process and finally delivers a suitable solution to the optimization problem. By imitating the social, economic, and political evolution of countries and mathematically modeling parts of this process, it provides operators, organized into a regular algorithm, that can help solve complex optimization problems. In effect, the algorithm treats candidate solutions as countries and tries to improve them gradually over iterations until it reaches the optimum.
        Method: The proposed algorithm of this article, a hybrid of a neural network and ICA, uses ICA's mathematically modeled socio-political process to provide a strong and efficient algorithm for optimizing fraud detection (a simplified sketch of ICA's core loop follows the abstract).
        Findings: Our experiments showed that neural classification with a transaction-rejection option can achieve a very low error rate while maintaining a very high detection rate. In this study we reached an accuracy of 98.54%, higher than previous methods.
        Discussion: This research performed credit card fraud detection with the aim of identifying the fraud rate, increasing accuracy, and minimizing the system error rate by combining neural networks with ICA. Effective features were also extracted in the evaluation of fraud detection. It can be concluded that the proposed classification system can achieve very high detection performance on credit card financial transactions.
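        A simplified sketch of ICA's core loop (assimilation plus a revolution step; the competition between empires is omitted for brevity), minimizing a generic cost function. In the hybrid described above, the cost would be the neural network's classification error as a function of its weight vector; population sizes and the assimilation rate beta are assumptions.

        ```python
        # Simplified imperialist competitive algorithm: colonies move toward
        # their nearest imperialist; a few weak colonies are randomly restarted.
        import numpy as np

        def ica_minimize(cost, dim, n_countries=50, n_imperialists=5,
                         iters=200, beta=2.0, seed=0):
            rng = np.random.default_rng(seed)
            countries = rng.uniform(-1, 1, (n_countries, dim))
            for _ in range(iters):
                costs = np.array([cost(c) for c in countries])
                order = np.argsort(costs)
                imperialists = countries[order[:n_imperialists]].copy()
                # Assimilation: each colony takes a random step toward the
                # nearest imperialist.
                for i in order[n_imperialists:]:
                    d = np.linalg.norm(imperialists - countries[i], axis=1)
                    imp = imperialists[np.argmin(d)]
                    countries[i] += beta * rng.random(dim) * (imp - countries[i])
                # Revolution: random restart of the weakest colonies.
                worst = order[-max(1, n_countries // 10):]
                countries[worst] = rng.uniform(-1, 1, (len(worst), dim))
            costs = np.array([cost(c) for c in countries])
            return countries[np.argmin(costs)]

        # Toy cost standing in for a network's loss over a weight vector.
        best = ica_minimize(lambda w: np.sum((w - 0.3) ** 2), dim=4)
        print(best)  # should approach [0.3, 0.3, 0.3, 0.3]
        ```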