  • List of Articles


      • Open Access Article

        1 - A Comparative Study of Open-Source Software for Deployment and Management of Cloud Computing Utilizing a Big Data Processing Quality Model
        Mahdi Jafari, Amir Kalbasi
        Introduction: The volume of data produced by human society is growing rapidly. Data is being produced in many different industries, such as manufacturing, transportation, healthcare, and social networks. Given this volume, data storage and processing are among the most important issues when dealing with big data. The main challenges are data storage and management, data processing and analytics, and resource management to provide the infrastructure needed to support the first two. Cloud computing, due to its features and architecture, is a promising infrastructure for storing and processing big data. Different cloud deployment models exist, namely public, private, community, and hybrid clouds. To store and process big data in a cloud environment, individuals and organizations may be more inclined to deploy and manage private clouds to gain greater control over resources and their data. Numerous open-source software systems have been developed for the deployment and management of private clouds. Evaluating and choosing among them is a challenging task, especially for those who are new to these large-scale software systems. Furthermore, because each cloud infrastructure management system continuously delivers new releases with major changes or new features and modules, choosing among them can be a challenge even for an experienced user.
        Method: In this paper, we first provide the Quality Model for Cloud Infrastructure (QMCI) for evaluating cloud infrastructure management software. QMCI focuses on quality factors that are important when processing big data. The top-level factors of this model are (1) functionality, (2) usability, (3) reliability, (4) supportability, and (5) performance. The top-level factors are then divided into sub-factors to further refine the quality model, and metrics can be attached to the sub-factors to evaluate a given system.
        Discussion: Based on QMCI, multiple-criteria decision-making can be used to choose the cloud infrastructure management software that best suits a given set of criteria. In the remainder of this paper, three of the most popular open-source cloud infrastructure management systems, namely Eucalyptus, OpenStack, and Apache CloudStack, are evaluated based on QMCI to compare their capabilities, weaknesses, and strengths from a big data processing perspective. Previous literature covering these three systems was studied and used to perform the comparative study.
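        As an illustration of how QMCI could feed a multiple-criteria decision, the following minimal Python sketch ranks the three platforms by a weighted sum over the five top-level factors. The weights and per-platform scores are hypothetical placeholders, not the paper's evaluation results.

```python
# Minimal weighted-sum MCDM sketch over the QMCI top-level factors.
# All weights and scores are hypothetical, for illustration only.

FACTORS = ["functionality", "usability", "reliability", "supportability", "performance"]

# Relative importance of each factor for a big-data workload (sums to 1).
weights = {"functionality": 0.30, "usability": 0.10, "reliability": 0.25,
           "supportability": 0.15, "performance": 0.20}

# Per-platform scores on a 0-10 scale (placeholder values, not the paper's results).
scores = {
    "Eucalyptus":        {"functionality": 7, "usability": 6, "reliability": 7, "supportability": 5, "performance": 7},
    "OpenStack":         {"functionality": 9, "usability": 6, "reliability": 8, "supportability": 9, "performance": 8},
    "Apache CloudStack": {"functionality": 8, "usability": 8, "reliability": 7, "supportability": 7, "performance": 7},
}

def weighted_score(platform_scores):
    """Aggregate one platform's factor scores into a single figure of merit."""
    return sum(weights[f] * platform_scores[f] for f in FACTORS)

# Rank platforms from best to worst under these (hypothetical) criteria.
for name, s in sorted(scores.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(s):.2f}")
```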
      • Open Access Article

        2 - Definition of Bus Priority Vector to Solve Distribution Load Flow for Radial Networks using MATLAB
        Peyman Nazarian
        Introduction: Load flow analysis is a fundamental study for all power networks, including distribution networks, under steady-state conditions. Power system planning and operation, network reconfiguration, and many optimization studies require a large number of load flow calculations in normal and emergency situations. Because the load flow equations are nonlinear, iterative solution methods are needed, and because power grids have a large number of buses, matrix algebra is used. Some of these applications require fast iterative load flow solutions, so it is very important that the analysis is performed efficiently. A number of load flow algorithms are designed specifically for distribution systems. One such method takes the bus voltages as state variables, works iteratively, and uses special techniques to improve convergence.
        Method: The algorithm proposed in this paper, called SDLF, needs neither special matrices nor complex programming. In this method, the load flow can be computed easily with only a simple vector that encodes the priority of the buses, which we call the bus priority vector (BPV), together with the backward-forward sweep algorithm. Notably, the BPV is itself extracted from the network topology; this can be done either visually, from the electrical diagram of the network, or from the network data matrix D using MATLAB.
        Findings: In terms of convergence, increasing the number of iterations increases the accuracy, so the 10th iteration was taken as a sufficiently accurate reference for validating the method, given the required engineering precision. The results show that even the first iteration yields acceptable accuracy for the voltage magnitudes. In steady-state analysis of power networks, the voltage magnitude matters more than the voltage phase. Notably, the voltage phase need not be updated at every step; it is enough to compute the phases once after the magnitudes are obtained.
        Conclusion: This article introduces a new method, SDLF, for studying the load flow of distribution networks. With an acceptable engineering error, the proposed method can serve common power-network applications from the first iteration onward, and on this basis it can be used as an online load flow in SCADA systems. The method was tested on the IEEE 33-bus test network in the body of the article and its validity was confirmed. The BPV makes it possible to avoid complex matrices and extra calculations and reduces the load flow computation time.
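        To make the BPV-driven backward-forward sweep concrete, here is a minimal Python sketch of the iteration on a toy four-bus radial feeder. The network data, impedances, and loads are hypothetical, and the paper's SDLF specifics (per-unit conventions, convergence checks) are not reproduced.

```python
# Minimal backward-forward sweep for a toy radial feeder.
# Buses, impedances, and loads are hypothetical; the BPV lists buses from the
# root outward, so iterating it in reverse visits leaves first.

# branch data: child bus -> (parent bus, series impedance in per unit)
parent = {2: (1, 0.02 + 0.04j), 3: (2, 0.03 + 0.05j), 4: (2, 0.02 + 0.03j)}
# constant-power loads at each bus (per unit)
load = {2: 0.5 + 0.2j, 3: 0.4 + 0.15j, 4: 0.3 + 0.1j}
BPV = [1, 2, 3, 4]                 # bus priority vector: root first, leaves last

V = {b: 1.0 + 0.0j for b in BPV}   # flat start; bus 1 is the slack bus

for _ in range(10):                # fixed iteration count, mirroring the paper's test
    # backward sweep: branch currents accumulated from the leaves toward the root
    I = {b: (load[b] / V[b]).conjugate() for b in BPV if b != 1}
    for b in reversed(BPV):
        if b != 1:
            p, _z = parent[b]
            if p != 1:
                I[p] += I[b]       # fold downstream current into the parent branch
    # forward sweep: voltage drops applied from the root toward the leaves
    for b in BPV:
        if b != 1:
            p, z = parent[b]
            V[b] = V[p] - z * I[b]

for b in BPV:
    print(f"bus {b}: |V| = {abs(V[b]):.4f} pu")
```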
      • Open Access Article

        3 - A blind and robust video watermarking method based on hybrid 3-D transform
        Shahrokh Fallah Torbehbar, Farzad Zargari
        Introduction: Digital images and videos can be copied, reproduced, and distributed with the same quality as the originals, which violates the copyright of the original producers and distributors. As a result, embedding information about the original producer and distributor in digital images and video has attracted great attention for digital rights management. Watermarking provides the means to embed the required information in images and videos. Robust watermarking is used for embedding authentication information and hence should withstand various attacks; in fragile watermarking, by contrast, the embedded data should be destroyed by any alteration of the watermarked image or video. Reversible watermarking techniques allow lossless restoration of the original image from the watermarked image. Watermarking is non-blind when a copy of the signature or other related information is required to extract the signature from the watermarked image or video, and blind when the signature can be extracted without any subsidiary information. Watermarking the signature in a group of successive video frames using 3-D transforms has attracted attention because it makes the watermarked video more robust against attacks such as frame averaging and frame alteration, and it avoids the problems caused by independently watermarking the signature in one or several frames.
        Methods: The Contourlet transform offers a high degree of directionality and anisotropy in addition to the multi-scale and time-frequency localization properties of the wavelet transform. As a result, the Contourlet transform represents curved edges in images with smoother contours and fewer coefficients than the wavelet transform. In this paper, a blind, robust watermarking method based on a hybrid 3-D transform is proposed. The hybrid 3-D transform is derived by combining the 2-D Contourlet transform with the 1-D wavelet transform. The signature is watermarked in the low-frequency sub-band of the third-level transform: to embed it, a modified copy of the high-energy coefficients of the even part is stored in the odd part. For signature extraction, the watermarked region is partitioned into odd and even columns, the 3-level 3-D transform is applied to both parts, and the high-energy sub-bands of the odd and even parts are separated to extract the signature.
        Results: Experimental results indicate low degradation of the watermarked video's quality, along with high robustness against common attacks in comparison with other tested blind video watermarking methods.
        Discussion: A comparison with other methods indicates the superior performance of the proposed method under most of the attacks.
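        The odd/even-column embedding idea can be sketched as follows. Since the 2-D Contourlet transform has no standard Python implementation, a plain 3-D wavelet from PyWavelets stands in for the paper's hybrid transform, and the scaling factors and bit encoding are hypothetical simplifications of the scheme described above.

```python
# Sketch of the odd/even-column embedding idea on a group of frames.
# A 3-D wavelet (PyWavelets) stands in for the paper's hybrid
# 2-D Contourlet + 1-D wavelet transform; scale factors are hypothetical.
import numpy as np
import pywt

def embed(frames, bits, wavelet="db1", level=3):
    """frames: (T, H, W) float array; bits: iterable of 0/1."""
    odd, even = frames[:, :, 1::2].copy(), frames[:, :, 0::2]
    c_odd = pywt.wavedecn(odd, wavelet, level=level)
    c_even = pywt.wavedecn(even, wavelet, level=level)
    lo_o, lo_e = c_odd[0], c_even[0]          # low-frequency sub-bands
    # locate the highest-energy low-band coefficients of the even part
    idx = np.unravel_index(
        np.argsort(np.abs(lo_e), axis=None)[::-1][:len(bits)], lo_e.shape)
    for k, b in enumerate(bits):              # store a modified copy in the odd part
        pos = tuple(i[k] for i in idx)
        lo_o[pos] = lo_e[pos] * (1.05 if b else 0.95)
    watermarked = frames.copy()
    rec = pywt.waverecn(c_odd, wavelet)       # rebuild the odd columns
    watermarked[:, :, 1::2] = rec[:odd.shape[0], :odd.shape[1], :odd.shape[2]]
    return watermarked

# toy usage: 16 frames of 64x64 noise, 4 signature bits
frames = np.random.rand(16, 64, 64)
wm = embed(frames, bits=[1, 0, 1, 1])
print("max pixel change:", np.max(np.abs(wm - frames)))
```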
      • Open Access Article

        4 - A new algorithm for data clustering using a combination of Genetic and Firefly algorithms
        Mahsa Afsardeir, Mansoure Afsardeir
        Introduction: With the progress of technology and the increasing volume of data in databases, the demand for fast and accurate discovery and extraction of knowledge from databases has increased. Clustering is one of the data mining approaches proposed to analyze and interpret data by exploring structures based on similarities or differences. One of the most widely used clustering methods is k-means. In this algorithm, cluster centers are selected randomly and each object is assigned to the cluster whose center it is most similar to. The algorithm therefore handles outlier data poorly, since such data easily shifts the centers and may produce undesirable results. Consequently, using optimization methods to find the best cluster centers can significantly improve its performance. Combining the firefly and genetic algorithms to optimize clustering accuracy is an innovation that has not been used before.
        Method: To optimize k-means clustering, this paper introduces a hybrid of the genetic algorithm and the firefly algorithm, referred to as the firefly-genetic algorithm.
        Findings: The proposed algorithm is evaluated on three well-known datasets, namely Breast Cancer, Iris, and Glass. The results show that the proposed algorithm performs better on all three datasets and confirm that the within-cluster distances are much smaller than those of the compared approaches.
        Discussion and Conclusion: The most important issue in clustering is determining the cluster centers correctly. A variety of methods and algorithms perform clustering with differing performance. In this paper, a new data clustering method is proposed based on the firefly metaheuristic and the genetic algorithm. Our main focus in this study was on two determining factors: the within-cluster distance (the distance of each data point to the center of its cluster) and the distance between the centers (the maximum distance between cluster centers). In the k-means algorithm, clustering is inaccurate because the cluster centers are selected randomly; by employing the firefly and genetic algorithms, we try to obtain more accurate cluster centers and, as a result, a correct clustering.
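        A minimal sketch of the hybrid search for cluster centers might look like the following, with a firefly attraction step followed by a genetic crossover-and-mutation step each generation. The fitness function, operator order, and all parameters are illustrative guesses rather than the paper's exact formulation.

```python
# Minimal sketch of a hybrid firefly + genetic search for k-means centers.
# Fitness, operator order, and all parameters are illustrative guesses.
import numpy as np

rng = np.random.default_rng(0)

def fitness(centers, X):
    """Smaller is better: total distance of points to their nearest center."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1).sum()

def hybrid_cluster(X, k=3, pop=20, gens=50, beta=0.5, gamma=1.0, mut=0.1):
    n_feat = X.shape[1]
    P = rng.uniform(X.min(0), X.max(0), size=(pop, k, n_feat))  # candidate center sets
    for _ in range(gens):
        f = np.array([fitness(c, X) for c in P])
        # firefly step: each candidate moves toward every brighter (fitter) one
        for i in range(pop):
            for j in range(pop):
                if f[j] < f[i]:
                    r2 = np.sum((P[i] - P[j]) ** 2)
                    P[i] += beta * np.exp(-gamma * r2) * (P[j] - P[i])
            f[i] = fitness(P[i], X)
        # genetic step: crossover the two best, mutate, replace the worst
        order = np.argsort(f)
        a, b = P[order[0]], P[order[1]]
        mask = rng.random(a.shape) < 0.5
        child = np.where(mask, a, b) + mut * rng.normal(size=a.shape)
        P[order[-1]] = child
    f = np.array([fitness(c, X) for c in P])
    return P[np.argmin(f)]

# toy usage: three Gaussian blobs in 2-D
X = rng.normal(size=(150, 2)) + np.repeat(np.array([[0, 0], [5, 5], [0, 5]]), 50, axis=0)
print(np.round(hybrid_cluster(X), 2))
```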
      • Open Access Article

        5 - Providing a Solution Based on Fuzzy Logic to Reduce False Positive Alarms in the Intrusion Detection System
        Mohammad Akhlaghpour
        Introduction: An intrusion detection system is responsible for identifying and detecting unauthorized use of a system from outside as well as misuse or damage by internal users. Intrusion detection systems are built in software and hardware forms, each with its own advantages and disadvantages: hardware systems offer speed and accuracy, although their security can be defeated by intruders, while software-based intrusion detection offers acceptability and portability across different operating systems, which gives software systems greater generality and makes them the more common choice.
        Method: The behavior of the intrusion detection system is examined against various intrusion methods. To counter intrusion into systems and computer networks, several techniques, collectively called intrusion detection, have been developed to monitor the events occurring in a system or a computer network.
        Results: The performance of the intrusion detection system is presented for both misuse detection and anomaly detection, using fuzzy logic based on an alpha-cut. The results showed an accuracy rate of up to 91.26% and a false alarm detection rate of up to 90.96%.
        Discussion: An intrusion detection system is essential as the first line of defense for a network. Many algorithms depend on the quality of the dataset provided for intrusion detection. In recent developments in knowledge discovery and data access systems, interest in data-driven approaches has grown in order to curb the increase in control system cyber-attacks related to false alarms. Most machine-learning-based intrusion detection systems rely on web applications/operating systems or network layers to detect targeted attacks on hosts or networks. Nevertheless, research on evaluating and collecting intrusion detection datasets for false alarm behavior remains insufficient and requires further study.
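        A minimal sketch of fuzzy alert filtering with an alpha-cut is shown below: triangular memberships over two hypothetical alert features feed a tiny Mamdani-style rule base, and alerts whose attack degree falls below the alpha threshold are suppressed as likely false alarms. The features, rules, and threshold value are illustrative, not the paper's.

```python
# Minimal fuzzy alert-filtering sketch with an alpha-cut.
# Feature ranges, rules, and the alpha value are all hypothetical.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def attack_degree(conn_rate, fail_ratio):
    # fuzzify each input feature
    rate_high = tri(conn_rate, 50, 100, 150)
    rate_med  = tri(conn_rate, 10, 50, 100)
    fail_high = tri(fail_ratio, 0.4, 0.7, 1.0)
    # rules: AND = min, OR = max (Mamdani-style combination)
    r1 = min(rate_high, fail_high)   # high rate AND high failures -> attack
    r2 = min(rate_med, fail_high)    # medium rate AND high failures -> attack
    return max(r1, r2)

ALPHA = 0.5                          # alpha-cut: below this, treat as false alarm

alerts = [(120, 0.8), (30, 0.9), (90, 0.2), (60, 0.75)]
for rate, fail in alerts:
    deg = attack_degree(rate, fail)
    verdict = "raise" if deg >= ALPHA else "suppress"
    print(f"rate={rate:3d} fail={fail:.2f} degree={deg:.2f} -> {verdict}")
```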
      • Open Access Article

        6 - Verification and Validation for Software Requirements
        Hamidreza Mokhtari, Nasser Modiri
        Introduction: The main goal of software companies is to provide solutions in various fields that better meet the needs of customers. Successful modeling depends on finding the right, precise requirements. The key to successfully adapting and integrating the different parts under development, however, is selecting and prioritizing the requirements that will advance the workflow and ultimately lead to a quality product. Validation is the key part of this work: it comprises techniques that confirm that a set of requirements is accurate enough to build a solution that achieves the project's business objectives. Requirements change during a project, and managing these changes is important to ensure that the software built is correct for its stakeholders. In this research, we discuss the process of verifying and validating software requirements.
        Method: Requirements are extracted by discovering, reviewing, documenting, and understanding user needs and the limitations of a system. The results are presented as products such as textual requirement descriptions, use cases, processing diagrams, and user interface prototypes.
        Findings: Data mining and recommender systems can be used to elicit additional requirements; alternatively, social networks and collaborative filtering can be used to identify needs and generate requirements for large projects, as sketched below.
        Discussion: In product development, requirements engineering approaches focus exclusively on requirements development. The development process faces challenges because human resources are involved, and if these challenges are not addressed well at this stage, fixing them after the software is produced becomes extremely expensive. Errors should therefore be minimized and identified and corrected as early as possible. Based on the investigations carried out, one of the key issues in the field of requirements is validation, which confirms, first, that the requirements can be implemented as a set of characteristics matching the system description and, second, that the requirements possess a set of essential characteristics: completeness, consistency, conformance to standard criteria, freedom from contradictions, absence of technical errors, and lack of ambiguity. In fact, the purpose of validation is to ensure that a sustainable, reproducible product is created according to the requirements.
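        To illustrate the collaborative-filtering idea for requirements elicitation, here is a minimal Python sketch that recommends unrated requirements to a stakeholder based on similar stakeholders' ratings. The stakeholder/requirement names and the rating matrix are hypothetical.

```python
# Minimal collaborative-filtering sketch for recommending requirements:
# stakeholders who rated requirements similarly are assumed to share needs.
# The requirement names and the rating matrix are hypothetical.
import numpy as np

requirements = ["export-to-pdf", "sso-login", "audit-log", "dark-mode", "api-keys"]
# rows = stakeholders, cols = requirements; 0 means "not yet rated"
R = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 4, 0, 3],
    [0, 4, 5, 0, 4],
    [1, 0, 0, 5, 0],
], dtype=float)

def recommend(R, user, k=2):
    """Rank a user's unrated requirements by similar users' ratings."""
    norms = np.linalg.norm(R, axis=1)
    sims = R @ R[user] / (norms * norms[user] + 1e-9)   # cosine similarity
    sims[user] = -1                                     # exclude the user itself
    neighbors = np.argsort(sims)[::-1][:k]              # k most similar users
    # weighted average of neighbor ratings for each requirement
    scores = sims[neighbors] @ R[neighbors] / (sims[neighbors].sum() + 1e-9)
    unrated = np.where(R[user] == 0)[0]
    return sorted(((requirements[i], scores[i]) for i in unrated),
                  key=lambda t: -t[1])

for name, score in recommend(R, user=0):
    print(f"{name}: {score:.2f}")
```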