A New Architecture Based on Artificial Neural Network and PSO Algorithm for Estimating Software Development Effort
Subject Areas: Project Management
Amin Moradbeiky 1, Amid Khatibi Bardsiri 2
1 - Islamic Azad University, Kerman Branch, Kerman, Iran
2 - Islamic Azad University, Kerman Branch, Kerman, Iran
Keywords: Neural networks, development effort estimation, software project, particle swarm optimization
Abstract— Software project management has always faced challenges that can greatly affect the outcome of projects. For this reason, software project managers constantly seek solutions to these challenges. Relying on unguaranteed approaches or mere personal experience does not necessarily solve the problems; the field therefore requires tools that help software project managers confront such challenges. Estimating the effort required for software development is one of these important challenges. In this study, a neural-network-based architecture is proposed that uses the PSO algorithm to increase the accuracy of software development effort estimation. The proposed architecture was tested on several datasets, and similar experiments were conducted on the same datasets using several widely used effort estimation methods. The results demonstrate the accuracy of the proposed model and are of use to researchers in software engineering and data mining.
I. INTRODUCTION
Due to the intangible nature of software, software companies often have difficulty estimating the effort required to complete software projects [17]. Software project managers have always tried, in one way or another, to direct and respond to the challenges facing software projects. In this regard, tools that enable managers to predict the forthcoming situation of a project, or to assess the impact of decisions on its future, have been of special interest to researchers. Such instruments can play an important role in better understanding the future conditions of projects, and they generally operate in either algorithmic or non-algorithmic ways. Algorithmic methods are neatly formulated and work within a specific framework; regression-based approaches and the COCOMO method belong to this group. Non-algorithmic methods form another group and work in a more flexible way, attempting to predict future conditions from the present situation. Expert judgment was the first method, introduced in the 1960s, for estimating software development effort [13]. Other methods such as COCOMO [3], COCOMO II [6], SLIM [14], and function point analysis [1] have been formulated since then; these methods follow an algorithmic approach. A number of studies have used linear regression [15][8], non-linear regression [8], and regression tree [4][5] methods. Among the non-algorithmic methods are analogy-based estimation (ABE) [16] and its associated compound methods [12][11][9][7][2].
Using an artificial neural network is one of the simplest and most widely applicable data modeling methods. In this paper, we employ an artificial neural network to model and estimate software projects. In the next section, neural networks and their mathematical concept are explained. Afterwards, the criteria for evaluating estimation accuracy are presented. The proposed neural-network-based estimation architecture is then described and, finally, tested on several datasets.
II. Neural Network
Neural networks are simplified models of real neural systems and are widely used in solving various scientific problems. Their scope of application is quite vast, ranging from classification to interpolation, estimation, and detection. Perhaps the most important advantage of these networks is their versatility, combined with their ease of use.
1. The Concept of Network
One of the most efficient ways to solve complex problems is to break them down into simpler sub-problems, such that each sub-problem is easier to understand and describe. In fact, a network is a collection of simple structures that together describe the final complex system. There are different types of networks, but they all share two components:
1) A set of nodes, each being a computing unit of the network that receives inputs and processes them to obtain the required outputs. The processing performed by a node ranges from simple operations, such as collecting the inputs, to highly complex computations; in special cases, a node may itself contain a network.
2) Connections between nodes, which determine how information passes between them.
The interaction between nodes that results from these connections can lead to an overall behavior of the network that cannot be observed in any individual element per se. This emergent character of the overall behavior, compared with the performance of each single node, is what makes the network a powerful instrument. In short, when a set of simple elements is combined into a network, it can exhibit behavior that none of the elements can produce alone.
2. Artificial Neural Network
As mentioned earlier, there are various types of networks. Among them is one in which each node is an artificial neuron; this computational approach is known as an artificial neural network (ANN). An artificial neuron is a computational model based on the biological neurons of the human nervous system. Natural neurons receive their input through synapses, which are located on the dendrites or the cell membrane. In a real neuron, the dendrites modulate the amplitude of the received pulses; this modulation is not fixed over time but is learned by the neuron. If the incoming signal is sufficiently strong (i.e., if it exceeds a threshold value), the neuron is activated and sends a signal along the axon. This signal can, in turn, enter a synapse and stimulate other neurons. Fig. 1 illustrates a real neuron.
Fig. 1. A biological neuron.
3. Mathematical Model of Artificial Neural Network
When modeling neurons, their complexities are set aside and only their basic concepts are considered; otherwise, the modeling procedure would be very difficult. Apart from these simplifications, the main difference between the model and reality is that, in the real network, the inputs are temporal signals, whereas in this model they are real numbers.
There are many possible variations of the model presented in Fig. 2. For instance, the weights of the neural network, which scale the signals passed between nodes, can be positive or negative. Various functions can be used for thresholding; among the best known are the arcsine, arctangent, and sigmoid functions, which must be continuous, smooth, and differentiable. The number of input nodes can also vary. Clearly, as the number of nodes increases, it becomes harder to determine the weights, so new ways of solving this problem are needed. The process of determining optimal weights and setting their values is mainly iterative: the network is trained on rules and data and, exploiting this learning capability, a variety of training algorithms have been proposed, all of which aim to bring the produced output close to the ideal, expected one.
Fig. 2. Mathematical Model of Artificial Neural Network
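To make the model concrete, the following Python sketch computes the output of a single artificial neuron: the real-valued inputs are combined as a weighted sum and passed through a sigmoid threshold function. The input values, weights, and bias shown are arbitrary illustrative numbers, not values taken from this study.

```python
import numpy as np

def sigmoid(z):
    """A smooth, differentiable thresholding function."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(inputs, weights, bias):
    """Weighted sum of real-valued inputs followed by the sigmoid threshold,
    mirroring the simplified neuron model of Fig. 2."""
    return sigmoid(np.dot(weights, inputs) + bias)

# Illustrative values only: a neuron with three inputs.
x = np.array([0.5, -1.2, 3.0])   # inputs are real numbers, not temporal signals
w = np.array([0.8, -0.4, 0.1])   # weights may be positive or negative
print(neuron_output(x, w, bias=0.2))
```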
III. Equations for Estimation Error Calculation
To determine the estimation error in this study, we employ equations that are widely used by researchers in this field; using them allows the results of this study to be compared with similar works. The criteria used in this article are the relative error (RE), the magnitude of relative error (MRE), the median magnitude of relative error (MdMRE), and the prediction percentage (PRED), as shown in equations (1) to (4); PRED is commonly reported at a threshold of 0.25.
$$RE_i = \frac{Estimated_i - Actual_i}{Actual_i} \qquad (1)$$

$$MRE_i = \frac{\left|Estimated_i - Actual_i\right|}{Actual_i} \qquad (2)$$

$$MdMRE = \operatorname{median}\left(MRE_1, MRE_2, \ldots, MRE_N\right) \qquad (3)$$

$$PRED(x) = \frac{1}{N}\sum_{i=1}^{N}\begin{cases}1, & MRE_i \le x\\ 0, & \text{otherwise}\end{cases} \qquad (4)$$
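As a brief illustration of these criteria, the following Python sketch computes MRE, MdMRE, and PRED for a list of actual and estimated effort values; the 0.25 threshold for PRED and the sample numbers are assumptions made only for this example.

```python
import numpy as np

def mre(actual, estimated):
    """Magnitude of relative error for each project (Eq. 2)."""
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return np.abs(estimated - actual) / actual

def mdmre(actual, estimated):
    """Median magnitude of relative error over all projects (Eq. 3)."""
    return np.median(mre(actual, estimated))

def pred(actual, estimated, threshold=0.25):
    """Fraction of projects whose MRE does not exceed the threshold (Eq. 4)."""
    return np.mean(mre(actual, estimated) <= threshold)

# Illustrative effort values only, not taken from the datasets of this study.
actual_effort    = [100, 250, 80, 400]
estimated_effort = [110, 200, 85, 520]
print(mdmre(actual_effort, estimated_effort))   # 0.15
print(pred(actual_effort, estimated_effort))    # 0.75
```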
IV. The Proposed Model
The purpose of this research is to use a neural network to model the data and then use that model for prediction. Since properly setting the parameters of an artificial neural network helps the resulting network model its source data more accurately, we propose a method for building such a model. The new method uses the PSO artificial-intelligence algorithm to model the data accurately with an artificial neural network. Fig. 3 shows the architecture of the training stage, and Fig. 4 illustrates the testing stage of the proposed method.
In the proposed model, the data are first divided into training and testing sets. In the training stage, as shown in Fig. 3, the PSO algorithm searches for the best settings for building the network. Each candidate setting proposed by the PSO algorithm is used to build a network and make predictions; the resulting error is calculated and returned to the PSO algorithm as feedback on that setting. The search continues until a predetermined termination condition is met. The aim of this stage is to discover the neural network settings that yield a prediction model with minimum error.
Fig. 3. Architecture of the training stage
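The sketch below outlines one possible realization of this training stage in Python. It is a minimal illustration under several assumptions: the PSO is taken to search only the number of hidden neurons and the learning rate, scikit-learn's MLPRegressor stands in for the neural network, and a simple hold-out split of the training data provides the error fed back to the PSO. The settings actually searched by the proposed architecture may differ.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def mdmre(actual, estimated):
    """Median magnitude of relative error (Eq. 3)."""
    actual, estimated = np.asarray(actual, float), np.asarray(estimated, float)
    return np.median(np.abs(estimated - actual) / actual)

def build_network(setting):
    """Translate one candidate setting (hidden neurons, learning rate) into a
    network; both tuned parameters are illustrative assumptions."""
    hidden, lr = int(round(setting[0])), float(setting[1])
    return MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                        max_iter=2000, random_state=0)

def setting_error(setting, X_tr, y_tr, X_val, y_val):
    """Error returned to the PSO as feedback for one setting (Fig. 3)."""
    net = build_network(setting).fit(X_tr, y_tr)
    return mdmre(y_val, net.predict(X_val))

def pso_search(X, y, n_particles=10, n_iter=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Search for the network settings that minimize the prediction error."""
    bounds = np.array([[2.0, 30.0], [1e-4, 1e-1]])   # assumed search ranges
    rng = np.random.default_rng(seed)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                                random_state=seed)
    dim = len(bounds)
    pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_err = np.array([setting_error(p, X_tr, y_tr, X_val, y_val) for p in pos])
    gbest = pbest[np.argmin(pbest_err)].copy()
    gbest_err = pbest_err.min()
    for _ in range(n_iter):                          # search until termination
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
        err = np.array([setting_error(p, X_tr, y_tr, X_val, y_val) for p in pos])
        improved = err < pbest_err
        pbest[improved], pbest_err[improved] = pos[improved], err[improved]
        if err.min() < gbest_err:
            gbest, gbest_err = pos[np.argmin(err)].copy(), err.min()
    return gbest, gbest_err                          # best settings and their error
```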
The training stage yields a model for prediction; we now need to test this model in order to assess its accuracy. To test the neural-network model, we use the data set aside for this stage. Fig. 4 shows the architecture of the testing stage. The projects in the test set are estimated by the network one by one, and the estimation error is calculated for each of them. The total error of the estimation process is then computed from these individual errors and reported as the test result.
Fig. 4. Architecture of the testing stage
V. Assessment Method
In estimation with a neural network, the assignment of samples to the training or testing sets has a considerable impact on the resulting error as well as on the quality of network training [10]. Therefore, to demonstrate the stability of the results of the proposed architecture, we need a method that shows the results are independent of how the samples are placed. Various assessment schemes exist for this purpose, such as 3-fold and 10-fold cross-validation; the present study uses the leave-one-out (LOO) method. In this method, each project in turn is held out as the test case and is estimated using the best parameters obtained from the training stage, so the number of test runs equals the number of projects. The value of MdMRE is the median of the errors obtained from estimating the individual projects.
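A sketch of this leave-one-out procedure, reusing the pso_search and build_network routines from the training-stage sketch above, is given below; the PRED threshold of 0.25 is again an assumption made for illustration.

```python
import numpy as np

def loo_evaluate(X, y, pred_threshold=0.25):
    """Leave-one-out evaluation: each project in turn is held out, the
    training-stage search is run on the remaining projects, and the held-out
    project is estimated with the resulting network."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    mres = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i                   # all projects except i
        best_setting, _ = pso_search(X[keep], y[keep])  # training-stage search
        net = build_network(best_setting).fit(X[keep], y[keep])
        estimate = net.predict(X[i:i + 1])[0]
        mres.append(abs(estimate - y[i]) / y[i])        # MRE of project i
    mres = np.array(mres)
    return np.median(mres), np.mean(mres <= pred_threshold)  # MdMRE, PRED

# Example usage (with a dataset loaded into features X and effort values y):
# mdmre_val, pred_val = loo_evaluate(X, y)
```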
VI. Introducing Datasets
Three datasets, namely COCOMO, Desharnais, and Maxwell, have been employed to test the proposed model. These datasets have been widely used by researchers. In the following sections, they are statistically analyzed and then used in the tests.
1. Data analysis of COCOMO dataset
The COCOMO dataset consists of 63 projects, each described by 17 features. Table 1 summarizes the data in this dataset; the last feature ('actual') is the estimation target.
TABLE 1
COCOMO DATA ANALYSIS
Feature | Minimum | Maximum | Mean | Median |
'rely' | 0.75 | 1.4 | 1.036349 | 1 |
'data' | 0.94 | 1.16 | 1.004444 | 1 |
'cplx' | 0.7 | 1.65 | 1.092063 | 1.07 |
'time' | 1 | 1.66 | 1.11381 | 1.06 |
'stor' | 1 | 1.56 | 1.14381 | 1.06 |
'virt' | 0.87 | 1.3 | 1.008413 | 1 |
'turn' | 0.87 | 1.15 | 0.971746 | 1 |
'acap' | 0.71 | 1.46 | 0.905238 | 0.86 |
'aexp' | 0.82 | 1.29 | 0.948571 | 1 |
'pcap' | 0.7 | 1.42 | 0.93746 | 0.86 |
'vexp' | 0.9 | 1.21 | 1.005238 | 1 |
'lexp' | 0.95 | 1.14 | 1.001429 | 1 |
'modp' | 0.82 | 1.24 | 1.004127 | 1 |
'tool' | 0.83 | 1.24 | 1.016984 | 1 |
'sced' | 1 | 1.23 | 1.048889 | 1 |
'loc' | 1.98 | 1150 | 77.20984 | 25 |
'actual' | 5.9 | 11400 | 683.527 | 98 |
2. Data analysis of Desharnais dataset
This dataset includes 77 projects, each evaluated numerically on 10 features. Table 2 presents the statistical characteristics of this dataset.
TABLE 2
DESHARNAISDATA ANALYSIS
Feature | Minimum | Maximum | Mean | Median |
F1 | 0 | 4 | 2.298 | 2 |
F2 | 0 | 7 | 2.649 | 3 |
F3 | 1 | 36 | 11.246 | 10 |
F4 | 9 | 886 | 179.805 | 134 |
F5 | 7 | 387 | 120.545 | 96 |
F6 | 92 | 793 | 285.35 | 259 |
F7 | 5 | 52 | 29.528 | 28 |
F8 | 83 | 698 | 272.509 | 247 |
F9 | 1 | 3 | 1.377 | 1 |
effort | 651 | 14987 | 4795 | 3542 |
3. Data analysis of Maxwell dataset
Another dataset examined here is Maxwell, which is composed of 62 projects. This dataset defines 26 numerical features for each project and has so far been investigated in many studies. Table 3 analyzes the data of this dataset.
TABLE 3
DATA ANALYSIS OF MAXWELL DATASET
Feature | Minimum | Maximum | Mean | Median |
F1 | 1 | 5 | 2.354839 | 2 |
F2 | 1 | 5 | 2.612903 | 2 |
F3 | 0 | 4 | 1.032258 | 1 |
F4 | 1 | 2 | 1.935484 | 2 |
F5 | 1 | 2 | 1.870968 | 2 |
F6 | 0 | 1 | 0.241935 | 0 |
F7 | 1 | 4 | 2.548387 | 3 |
F8 | 1 | 5 | 3.048387 | 3 |
F9 | 1 | 5 | 3.048387 | 3 |
F10 | 2 | 5 | 3.032258 | 3 |
F11 | 2 | 5 | 3.193548 | 3 |
F12 | 1 | 5 | 3.048387 | 3 |
F13 | 1 | 4 | 2.903226 | 3 |
F14 | 1 | 5 | 3.241935 | 3 |
F15 | 2 | 5 | 3.806452 | 4 |
F16 | 2 | 5 | 4.064516 | 4 |
F17 | 2 | 5 | 3.612903 | 4 |
F18 | 2 | 5 | 3.419355 | 3 |
F19 | 2 | 5 | 3.822581 | 4 |
F20 | 1 | 5 | 3.064516 | 3 |
F21 | 1 | 5 | 3.258065 | 3 |
F22 | 1 | 5 | 3.33871 | 3 |
F23 | 4 | 54 | 17.20968 | 13.5 |
F24 | 48 | 3643 | 673.3065 | 385 |
F25 | 1 | 9 | 5.580645 | 6 |
effort | 583 | 63694 | 8223.21 | 5189.5 |
VII. Testing the Datasets
In this section, the proposed architecture is tested on the datasets described above in order to evaluate its accuracy. The results are analyzed and presented separately for each dataset, and the accuracy of the architecture is calculated using the criteria and equations introduced in Section III.
1. Testing Desharnais dataset
The first test concerns the Desharnais dataset, whose characteristics were given in Section VI.2. The MdMRE and PRED values obtained by running the proposed architecture under LOO evaluation are given in Table 4, together with the results of several widely used estimation methods. In this test, MdMRE and PRED were 0.3252 and 0.3636, respectively.
TABLE 4
COMPARING THE EFFECTIVENESS OF DIFFERENT ESTIMATION METHODS IN DESHARNAIS DATASET
Approach | MdMRE | Pred |
ABE K=2 | 0.4295 | 0.2987 |
ABE K=3 | 0.3921 | 0.3117 |
ABE K=4 | 0.3333 | 0.3247 |
ABE K=5 | 0.3642 | 0.3636 |
CART | 0.4280 | 0.2857 |
MLR | 0.4140 | 0.2727 |
SWR | 0.6557 | 0.1169 |
Proposed Model | 0.3252 | 0.3636 |
2. Testing COCOMO dataset
A second test was carried out on the COCOMO dataset, whose characteristics were presented in Section VI.1. The MdMRE and PRED values obtained by running the proposed architecture under LOO evaluation are provided in Table 5. For this test, MdMRE and PRED amounted to 0.7496 and 0.1905, respectively.
TABLE 5
COMPARING THE EFFECTIVENESS OF DIFFERENT ESTIMATION METHODS IN COCOMO DATASET
Approach | MdMRE | Pred |
ABE K=2 | 0.8056 | 0.1270 |
ABE K=3 | 0.8013 | 0.1111 |
ABE K=4 | 0.7959 | 0.0952 |
ABE K=5 | 0.7679 | 0.1429 |
CART | 0.8597 | 0.1587 |
MLR | 1.0064 | 0.1746 |
SWR | 10.6590 | 0.0476 |
Proposed Model | 0.7496 | 0.1905 |
3. Testing Maxwell dataset
The next test was performed on the Maxwell dataset, whose characteristics were explained in Section VI.3. The MdMRE and PRED values obtained by running the proposed architecture under LOO evaluation are presented in Table 6. MdMRE and PRED in this test were 0.42 and 0.27, respectively.
TABLE 6
COMPARING THE EFFECTIVENESS OF DIFFERENT ESTIMATION METHODS IN MAXWELL DATASET
Approach | MdMRE | Pred |
ABE K=2 | 0.5659 | 0.2258 |
ABE K=3 | 0.4777 | 0.2097 |
ABE K=4 | 0.5069 | 0.1774 |
ABE K=5 | 0.5536 | 0.2097 |
CART | 0.5652 | 0.2581 |
MLR | 1.7900 | 0.0484 |
SWR | 1.3495 | 0.1129 |
Proposed Model | 0.42 | 0.27 |
VIII. Conclusion
Artificial neural networks are simple to operate and can be used for data modeling. The present study proposed an architecture based on an artificial neural network for modeling and estimating software projects; in this architecture, the PSO algorithm was used to configure the network. The results of testing this architecture demonstrated the efficacy of the model. It is recommended that future studies also take advantage of artificial-intelligence-based methods to configure artificial neural networks.
References
[1] Albrecht, A.J. and Gaffney, J.E., 1983, Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation, IEEE Trans. Software Eng., vol. 9, no. 6, pp. 639-648, Nov.
[2] Angelis, L. and Stamelos, I., 2000, A simulation tool for efficient analogy based cost estimation. Empir Softw Eng 5(1):35–68.
[3] Boehm, B., 1981, Software Engineering Economics. Prentice Hall.
[4] Briand, L., Emam, K.E., Surmann, D. and Wieczorek, I., 1999, An Assessment and Comparison of Common Software Cost Estimation Modeling Techniques, Proc. 21st Int’l Conf. Software Eng., pp. 313-323.
[5] Briand, L., Langley, T. and Wieczorek, I., 2000, A Replicated Assessment and Comparison of Common Software Cost Modeling Techniques, Proc. 22nd Int’l Conf. Software Eng., pp. 377-386.
[6] Boehm, B., Madachy, R. and Steece, B., 2000, Software Cost Estimation with Cocomo II. Prentice Hall.
[7] Chiu, N-H. and Huang, S-J., 2007, The adjusted analogy-based software effort estimation based on similarity distances. J Syst Softw 80(4):628–640.
[8] Finnie, G., Wittig, G. and Desharnais, J.-M., 1997, A Comparison of Software Effort Estimation Techniques: Using Function Points with Neural Networks, Case-Based Reasoning and Regression Models, J. Systems and Software, vol. 39, pp. 281-289.
[9] Gupta, S., Sikka, G. and Verma, H., 2011, Recent methods for software effort estimation by analogy. SIGSOFT Softw Eng Notes 36(4):1–5.
[10] Kocaguneli, E. and Menzies, T., 2013, Software effort models should be assessed via leave-one-out validation, The Journal of Systems and Software, 1879-1890.
[11] Kocaguneli, E., Menzies, T., Bener, A. and Keung, J.W., 2012, Exploiting the essential assumptions of analogy-based effort estimation. IEEE Trans Softw Eng 38(2):425–438.
[12] Milios, D., Stamelos, I. and Chatzibagias, C., 2011, Global optimization of analogy-based software cost estimation with genetic algorithms. In: Iliadis, L., Maglogiannis, I. and Papadopoulos, H. (eds), Artificial Intelligence Applications and Innovations, Springer, Boston, 364:350–359.
[13] Nelson, E.A., 1966, Management Handbook for the Estimation of Computer Programming Costs. System Developer Corp.
[14] Putnam, L.H., 1978, A General Empirical Solution to the Macro Software Sizing and Estimation Problem, IEEE Trans. Software Eng., vol. 4, no. 4, pp. 345-361.
[15] Sentas, P., Angelis, L., Stamelos, I. and Bleris, G., 2005, Software Productivity and Effort Prediction with Ordinal Regression, Information and Software Technology, vol. 47, pp. 17-29.
[16] Shepperd, M. and Schofield, C., 1997, Estimating software project effort using analogies. IEEE Trans Softw Eng 23 (11):736–743.
[17] The Standish Group, 2009, Chaos Report, Technical report, http://www.standishgroup.com.