  • List of Articles


      • Open Access Article

        1 - An Improved SSPCO Optimization Algorithm for Solving the Clustering Problem
        Rohollah Omidvar, Amin Eskandari, Narjes Heydari, Fatemeh Hemmat, Mohammad Feyli
        Swarm Intelligence (SI) is an innovative artificial intelligence technique for solving complex optimization problems. Data clustering is the process of grouping data into a number of clusters so that data in the same cluster share a high degree of similarity while being very dissimilar to data from other clusters. Clustering algorithms have been applied to a wide range of problems, including data mining, data analysis, pattern recognition, and image segmentation, and clustering is a widespread data analysis technique in fields such as engineering, medicine, and biology. The SSPCO optimization algorithm is a new optimization algorithm inspired by the behavior of a bird called the see-see partridge, and clustering is one of the problems to which such intelligent algorithms can be applied. In the present article, an improved chaotic SSPCO algorithm is used to cluster data on several benchmark datasets, and the results are compared with clustering by the artificial bee colony algorithm and a particle swarm clustering technique. Clustering tests on 13 datasets from the UCI machine learning repository show that the SSPCO-based method is a very efficient technique for clustering multivariate data.
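        The abstract does not give the SSPCO update rules, so the sketch below only illustrates the usual way swarm-based clustering is set up: a candidate solution encodes k centroids, and its fitness is the sum of squared distances from each point to its nearest centroid. All names and the random data are illustrative assumptions, not taken from the paper.

          import numpy as np

          def clustering_fitness(candidate, data, k):
              """Sum of squared distances to the nearest centroid.

              candidate : flat vector of length k * n_features, decoded into k centroids.
              data      : (n_samples, n_features) array.
              A swarm optimizer (SSPCO, ABC, PSO, ...) would minimize this value.
              """
              centroids = candidate.reshape(k, -1)                       # (k, n_features)
              # Squared Euclidean distance from every point to every centroid.
              d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
              return d2.min(axis=1).sum()                                # assign each point to its nearest centroid

          # Toy usage with random data and one random candidate solution.
          rng = np.random.default_rng(0)
          X = rng.normal(size=(150, 4))          # an Iris-sized placeholder dataset
          k = 3
          candidate = rng.normal(size=k * X.shape[1])
          print(clustering_fitness(candidate, X, k))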
      • Open Access Article

        2 - A Novel Caching Strategy in Video-on-Demand (VoD) Peer-to-Peer (P2P) Networks Based on Complex Network Theory
        Venus Marza, Amir H. JadidiNejad
        The popularity of video-on-demand (VoD) streaming has grown dramatically over the World Wide Web. Most users in VoD P2P networks have to wait a long time to access the videos they request, so reducing this waiting time is the main challenge for VoD P2P networks. In this paper, we propose a novel algorithm for caching video based on peers' priority and the videos' popularity distribution. The proposed mechanism has been evaluated on two different kinds of topology, the Erdos-Renyi model and the Barabasi-Albert model. Scale-free topologies are much more similar to real P2P networks such as the Internet, so they are closer to reality, and the decrease in waiting time is also more tangible in them. The results demonstrate how our caching mechanism can reduce delay, improve bandwidth consumption, and decrease transport costs. Finally, we conclude that increasing the network size and the number of video chunks leads to an even greater reduction in delay when the proposed algorithm is used.
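        The paper's caching rule itself is not spelled out in the abstract; the sketch below only shows how the two evaluation topologies can be generated with networkx, and one assumed way a Zipf-like video popularity distribution and a peer-priority score (here simply the peer's normalized degree) might decide what a peer caches. All constants and the priority rule are assumptions.

          import networkx as nx
          import numpy as np

          N_PEERS, N_VIDEOS, CACHE_SLOTS = 1000, 200, 5
          rng = np.random.default_rng(1)

          # The two topologies used for evaluation in the paper.
          er = nx.gnp_random_graph(N_PEERS, p=0.01, seed=1)        # Erdos-Renyi model
          ba = nx.barabasi_albert_graph(N_PEERS, m=5, seed=1)      # Barabasi-Albert (scale-free) model

          # Assumed Zipf-like popularity over videos (lower rank index = more popular).
          ranks = np.arange(1, N_VIDEOS + 1)
          popularity = (1.0 / ranks) / (1.0 / ranks).sum()

          def cache_choice(graph, peer):
              """Illustrative rule: a peer's priority (its normalized degree) decides how many
              slots it fills, and slots are filled by sampling videos in proportion to popularity."""
              priority = graph.degree[peer] / max(d for _, d in graph.degree)
              n_slots = max(1, round(CACHE_SLOTS * priority))
              return rng.choice(N_VIDEOS, size=n_slots, replace=False, p=popularity)

          print(cache_choice(ba, peer=0))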
      • Open Access Article

        3 - An Improved Flower Pollination Algorithm with AdaBoost Algorithm for Feature Selection in Text Documents Classification
        Hiwa Majidpour, Farhad Soleimanian Gharehchopogh
        In recent years, the production of text documents has grown exponentially, which is why their proper classification is necessary for better access. One of the main problems in classifying text documents is working in a high-dimensional feature space. Feature selection (FS) is one way to reduce the number of text attributes: working with the full feature space without FS increases the computational cost, which is a function of the length of the feature vector, and FS also helps to remove irrelevant attributes. The approach in this paper combines the Flower Pollination Algorithm (FPA) with the AdaBoost algorithm: FPA is used for FS and AdaBoost for the classification of text documents. Tests were conducted on the Reuters-21578, WEBKB, and CADE 12 datasets. The results show that the hybrid model achieves higher detection accuracy than the AdaBoost algorithm alone, and comparisons indicate higher detection accuracy of the proposed model compared with KNN-K-Means, NB-K-Means, and other learning models.
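        The FPA search loop is not reproduced here; the sketch below only shows the wrapper-style fitness evaluation such a feature-selection search would need, with AdaBoost as the classifier, as described in the abstract. The synthetic data stands in for a TF-IDF text matrix, and the sparsity penalty is a common choice rather than the paper's exact objective.

          import numpy as np
          from sklearn.datasets import make_classification
          from sklearn.ensemble import AdaBoostClassifier
          from sklearn.model_selection import cross_val_score

          # Placeholder data standing in for a vectorized text corpus.
          X, y = make_classification(n_samples=400, n_features=50, n_informative=10, random_state=0)

          def fs_fitness(mask, X, y):
              """Fitness of a binary feature mask: cross-validated AdaBoost accuracy,
              lightly penalized by the fraction of selected features."""
              if not mask.any():
                  return 0.0
              acc = cross_val_score(AdaBoostClassifier(random_state=0), X[:, mask], y, cv=3).mean()
              return acc - 0.01 * mask.mean()

          rng = np.random.default_rng(0)
          mask = rng.random(X.shape[1]) < 0.5      # one candidate solution an FPA search would evaluate
          print(fs_fitness(mask, X, y))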
      • Open Access Article

        4 - An Improved Algorithmic Method for Software Development Effort Estimation
        Elham Khatibi, Vahid Khatibi Bardsiri
        Accurate estimation is one of the most important activities in software project management. Different aspects of software projects must be estimated, among which time and effort are of particular importance for efficient project planning. Due to the complexity of software projects and the lack of information at their early stages, reliable effort estimation is a challenging issue. In this paper, a hybrid model is proposed to estimate the effort of software projects. The proposed model combines the particle swarm optimization algorithm with a linear regression method in which the coefficients are found optimally. Moreover, the estimation equation is adjusted using a project size metric so that the most accurate estimate is achieved. A relatively large real data set is employed to evaluate the performance of the proposed model, and the results are compared with other models. The obtained results show that the proposed hybrid model can improve the accuracy of the estimates.
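        The exact estimation equation and PSO settings are not given in the abstract; the sketch below assumes a simple power-law effort model, effort = a * size**b, and shows a minimal particle swarm optimizer tuning (a, b) by minimizing the mean magnitude of relative error (MMRE), a standard effort-estimation criterion. The project data are toy values for illustration only.

          import numpy as np

          rng = np.random.default_rng(0)

          # Toy project data: size in KLOC and actual effort in person-months (illustrative only).
          size   = np.array([10.0, 23.0, 5.5, 40.0, 15.0, 70.0, 8.0, 32.0])
          effort = np.array([39.0, 98.0, 20.0, 180.0, 60.0, 340.0, 30.0, 140.0])

          def mmre(params):
              """Mean magnitude of relative error of the model effort = a * size**b."""
              a, b = params
              pred = a * size ** b
              return np.mean(np.abs(effort - pred) / effort)

          # Minimal particle swarm optimization over the two coefficients (a, b).
          n_particles, n_iter = 30, 200
          pos = rng.uniform([0.1, 0.5], [10.0, 1.5], size=(n_particles, 2))
          vel = np.zeros_like(pos)
          pbest, pbest_val = pos.copy(), np.array([mmre(p) for p in pos])
          gbest = pbest[pbest_val.argmin()]

          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, 1))
              vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
              pos = np.clip(pos + vel, [0.01, 0.1], [20.0, 2.0])
              vals = np.array([mmre(p) for p in pos])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
              gbest = pbest[pbest_val.argmin()]

          print("a, b =", gbest, " MMRE =", mmre(gbest))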
      • Open Access Article

        5 - The Introduction of a Heuristic Mutation Operator to Strengthen the Discovery Component of XCS
        Ahmad Reza Pakraei, Kamal Mirzaie
        Extended classifier systems (XCS) try to solve learning problems online by producing a set of rules (classifiers). XCS is a rather complex combination of a genetic algorithm and reinforcement learning: the genetic algorithm discovers promising rules and reinforcement learning assigns them values. An important factor in the performance of XCS is its ability to discover rules that are not only as general as possible but also highly accurate. In this paper, a new mutation operator is introduced for XCS that increases the speed of learning and helps improve performance. Here, speed refers to the time the system needs to reach an appropriate solution, and performance refers to the quality of the solution obtained. The proposed algorithm, named XCS-KF, was evaluated on a common benchmark problem in this area known as the multiplexer. The results show that both the speed and the performance of the proposed algorithm increased significantly compared with the standard XCS algorithm.
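        The heuristic operator itself is not described in the abstract, so the sketch below only shows the standard 6-multiplexer benchmark used for evaluation and a plain uniform mutation over the usual {0, 1, #} classifier alphabet, which is the component the proposed operator would replace. The mutation rate and names are assumptions.

          import random

          def multiplexer6(bits):
              """6-multiplexer: the first 2 address bits select one of the 4 data bits."""
              address = bits[0] * 2 + bits[1]
              return bits[2 + address]

          def mutate_condition(condition, p_mut=0.04):
              """Standard uniform XCS mutation over the ternary alphabet {'0', '1', '#'};
              the paper's heuristic operator would replace this random choice."""
              alphabet = "01#"
              return "".join(
                  random.choice([s for s in alphabet if s != symbol]) if random.random() < p_mut else symbol
                  for symbol in condition
              )

          bits = [1, 0, 0, 1, 1, 0]
          print(multiplexer6(bits))          # address bits 10 -> data bit index 2 -> bits[4] == 1
          print(mutate_condition("01##10"))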
      • Open Access Article

        6 - Modeling Ghotour-Chai River’s Rainfall-Runoff Process by Genetic Programming
        Mina Ruhnavaz, Abdolreza Hatamlou
        Given the importance of water and of computing the runoff resulting from precipitation in recent decades, appropriate methods for predicting runoff from rainfall data are essential. Rainfall-runoff models are used to estimate the runoff generated from precipitation in a catchment area, and the rainfall-runoff process is a highly non-linear phenomenon. In the present study, the rainfall-runoff of the Ghotour-Chai River, one of the sub-basins of the Aras River with an area of 8544 square kilometers, is modeled by genetic programming and the results are analyzed. Daily rainfall-runoff records from the Marakan hydrometric station for the period 1386-1390 (Iranian calendar) were used: data from 1386-1389 for training and data from 1390 for testing. In this modeling, 8 input models were defined for the system. After applying the input models, the results were analyzed and evaluated using the root-mean-square error and the correlation coefficient. The findings show that genetic programming successfully models the rainfall-runoff process and can be suggested as an approach for modeling it.
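        The GP model structure itself is not given in the abstract; the sketch below only computes the two evaluation measures named there, root-mean-square error and the correlation coefficient, between observed and predicted daily runoff. The short arrays are placeholders, not the Marakan station records.

          import numpy as np

          def rmse(observed, predicted):
              """Root-mean-square error between observed and model-predicted runoff."""
              observed, predicted = np.asarray(observed), np.asarray(predicted)
              return np.sqrt(np.mean((observed - predicted) ** 2))

          def correlation(observed, predicted):
              """Pearson correlation coefficient between observed and predicted runoff."""
              return np.corrcoef(observed, predicted)[0, 1]

          # Placeholder series standing in for daily observed runoff and a GP model's output.
          observed  = np.array([12.0, 15.5, 9.8, 20.1, 18.3, 7.2])
          predicted = np.array([11.4, 16.0, 10.5, 19.0, 17.1, 8.0])
          print("RMSE:", rmse(observed, predicted), " r:", correlation(observed, predicted))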
      • Open Access Article

        7 - Search Based Weighted Multi-Bit Flipping Algorithm for High-Performance Low-Complexity Decoding of LDPC Codes
        Ehsan Olyaei Torshizi, Mohammad Amir Nazari Siahsar, Ali Akbar Khazaei, Hossein Sharifi
        In this paper, two new hybrid algorithms are proposed for decoding Low-Density Parity-Check (LDPC) codes. The original version of the proposed algorithms is named Search-Based Weighted Multi-Bit Flipping (SWMBF). The main idea of these algorithms is to flip multiple variable bits in each iteration, choosing the flips that lead to the syndrome vector with the least Hamming weight. To achieve this, the proposed algorithms perform a multi-dimensional search over all candidate bit positions that could be flipped in each iteration and select the best choices. Naturally, every iterative decoding algorithm offers a distinct trade-off between complexity and performance. The SWMBF algorithm, while being able to flip several bits per iteration, offers a faster convergence rate and lower hardware complexity than the modified WBF algorithm and other hybrid algorithms. To further simplify the original version and reduce its run time, we also introduce a simplified version, a new and highly efficient algorithm with acceptable performance compared with the BP algorithm, lower complexity, and fewer required iterations. Simulation results, compared with other known decoding algorithms, show that the proposed algorithms converge significantly faster, with a tangible reduction in iteration count and computational complexity, and achieve superior performance over other flipping algorithms with only a small performance penalty relative to the robust BP algorithm.
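        The full SWMBF search is not reproduced here; the sketch below shows the simpler single-bit version of the same idea, flipping at each iteration the one bit whose flip most reduces the Hamming weight of the syndrome s = H x mod 2. The small parity-check matrix is illustrative, not an LDPC code from the paper.

          import numpy as np

          def flip_decode(H, x, max_iter=50):
              """Greedy bit-flipping decoder: at each iteration flip the single bit that
              yields the lowest syndrome Hamming weight (the multi-bit SWMBF of the paper
              would search over combinations of bits instead)."""
              x = x.copy()
              for _ in range(max_iter):
                  syndrome = H @ x % 2
                  if not syndrome.any():                     # all parity checks satisfied
                      return x, True
                  # Syndrome weight obtained by flipping each bit in turn.
                  weights = [((H @ (x ^ np.eye(len(x), dtype=int)[i])) % 2).sum() for i in range(len(x))]
                  x[int(np.argmin(weights))] ^= 1            # flip the most promising bit
              return x, False

          # Tiny illustrative parity-check matrix and a received word with one bit error.
          H = np.array([[1, 1, 0, 1, 0, 0],
                        [0, 1, 1, 0, 1, 0],
                        [1, 0, 1, 0, 0, 1]])
          codeword = np.array([1, 0, 1, 1, 1, 0])            # satisfies H @ codeword % 2 == 0
          received = codeword.copy(); received[2] ^= 1       # inject a single bit flip
          print(flip_decode(H, received))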
      • Open Access Article

        8 - Novel Hybrid Fuzzy-Evolutionary Algorithms for Optimization of a Fuzzy Expert System Applied to Dust Phenomenon Forecasting Problem
        Somayeh Ghanbari, Rahil Hosseini, Mahdi Mazinani
        Nowadays, the dust phenomenon is one of the important challenges in warm and dry areas. Forecasting the phenomenon before it occurs helps in taking precautionary steps to prevent its consequences. Fuzzy expert systems can assist in coping with the uncertainty associated with complex environments such as the dust forecasting problem. This paper presents novel hybrid fuzzy-evolutionary algorithms to predict the dust phenomenon. First, a fuzzy expert system was designed, and it was then optimized using evolutionary algorithms, namely the genetic algorithm and differential evolution. The evolutionary nature of these algorithms is exploited to optimize the fuzzy system for the complex domain of the dust phenomenon. To evaluate the proposed hybrid models, a real dataset covering 55 years of the dust phenomenon in Zanjan province, Iran, was considered. The performance of these methods was investigated through ROC curve analysis combined with a 10-fold cross-validation technique. The accuracy of the fuzzy expert system was 92.13%, and after optimization it reached 93.5% with the fuzzy-genetic model and 97.30% with the hybrid differential evolution model. The results are promising for early forecasting of the dust phenomenon and preventing its consequences.
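        The fuzzy rule base and the GA/DE encodings are not described in the abstract; the sketch below only illustrates the evaluation protocol mentioned there, 10-fold cross-validation combined with an ROC analysis, with a generic classifier standing in for the optimized fuzzy expert system and synthetic data standing in for the Zanjan records.

          import numpy as np
          from sklearn.datasets import make_classification
          from sklearn.linear_model import LogisticRegression
          from sklearn.metrics import roc_auc_score
          from sklearn.model_selection import StratifiedKFold

          # Placeholder binary data (dust / no dust) standing in for the 55-year Zanjan dataset.
          X, y = make_classification(n_samples=500, n_features=6, weights=[0.8, 0.2], random_state=0)

          model = LogisticRegression(max_iter=1000)            # stand-in for the fuzzy expert system
          aucs = []
          for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
              model.fit(X[train], y[train])
              scores = model.predict_proba(X[test])[:, 1]      # continuous score needed for the ROC curve
              aucs.append(roc_auc_score(y[test], scores))

          print("mean AUC over 10 folds:", np.mean(aucs))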