Nonparametric Model of Preferred Efficiency for Interval Scale Data (DEA)
Hossein Abbasiyan*
Department of Mathematics, Aliabad Katoul Branch, Islamic Azad University, Aliabad Katoul, Iran
Abstract
The obtained benchmarks and preferred efficiency scores depend strongly on the selected Most Preferred Solution (MPS) units, and the new directional value-based measures are more general than the existing preference-based measures. In this paper, we examine the use of interval scale (IS) variables, particularly when an IS variable represents the difference between two variables (such as sales) that serve as inputs and/or outputs. Data Envelopment Analysis models are widely used in commercial firms to assess their technical, cost, and revenue efficiencies. The new models introduced in this study are based on inverse DEA, which helps to preserve cost and revenue efficiency. Originally, DEA models were designed to work only with technologies that have positive inputs/outputs. In Data Envelopment Analysis (DEA), we use a measure called Preferred Efficiency (PE) to evaluate Decision Making Units (DMUs) that have negative data derived from IS variables. PE is a concept that takes the DM's preferences into account while searching for the most preferred combination of DMU inputs and outputs that is efficient in DEA. Moreover, we approximate the indifference contour of the unknown Preferred Function (PF) at the Most Preferred Solution (MPS) by a supporting hyperplane on the production possibility set (PPS) at the MPS; to obtain this supporting hyperplane, we propose to assume that it is tangent to the indifference contour of the PF. We use radial DEA problems with Variable Returns to Scale (VRS), specifically BCC models, in the combined orientation where outputs are maximized and inputs are minimized simultaneously. We also decompose each IS variable into two Ratio Scale (RS) variables and then use a compromise solution approach to generate Common Weights (CW) for the decomposed input/output variables. We introduce an MOLP model that uses the input and output variables as objective functions, subject to the defining constraints of the production possibility set (PPS) of DEA models. Finally, the resulting PE scores and the corresponding procedure can be applied to solve practical problems with the aforementioned models.
Keywords: DEA, Preferred Efficiency, Interval Scale, Common Weights, Returns to Scale.
1. Introduction
Data Envelopment Analysis (DEA) models are widely used in commercial firms to assess their technical, cost, and revenue efficiencies. The new models introduced in this study are based on inverse DEA, which helps to preserve cost and revenue efficiency. The management of commercial institutions aims to minimize input costs for a given level of outputs or to maximize the revenue of outputs for a specified level of inputs. Various methods have been developed to incorporate the DM's preferences into efficiency analysis, including those of Halme et al. (1998), Korhonen et al. (2002), and Joro et al. (2003). The interpretation of Value Efficiency (VE) by Korhonen and Syrjanen (2005) and the improved estimate of VE by Zohrehbandian (2011) are also relevant, and we make use of Preferred Efficiency (PE). The basic advantage of DEA is that it does not require any preference information; nevertheless, incorporating the DM's judgments into the analysis is possible. The obtained benchmarks and preferred efficiency scores are strongly dependent on the selected MPS, and the new directional value-based measures are more general than the existing preference-based measures. Preference-based data envelopment analysis is a recent development that resorts to preference assessment protocols from multiple criteria decision analysis to transform the original input/output data to a value scale. Several papers discuss the incorporation of value efficiency in DEA for assessing efficiency. Gerami (2019) presents a multi-objective programming model to measure value efficiency, using the Step Method (STEM) to solve the model. Thanassoulis (2001) discusses the imputation of input-output values in DEA based on DMU-specific weights that maximize the efficiency ratings. Sahoo (2014) extends value-based models in DEA to develop new directional cost- and revenue-based measures of efficiency. Bauer (2007) proposes an integrated approach for market partitioning and benchmarking in DEA, measuring product efficiency from the customer's perspective as customer value. Overall, these papers provide different approaches and perspectives on incorporating value efficiency in DEA assessments. Negative data often result from observations of variables measured on an interval scale (IS), such as profit, and from changes in variables like sales and loans, which are commonly used as inputs and/or outputs in many DEA applications. However, IS data do not allow division, since the zero point is not defined and only distances can be calculated. Halme et al.'s approach measures the PE of each DMU as the distance to an approximated indifference contour of the DM's PF at the MPS. Different methods exist for obtaining an MPS; one simple way is to first compute the technical efficiency of each unit after decomposing the IS variables and then choose from the set of efficient units. If the number of efficient units is large, the DM may need to pick his/her MPS from this set. We approximate the indifference contour of the unknown PF at the MPS by the supporting hyperplane and then calculate the PE scores for each DMU in the selected direction by comparing the inefficient units to units having the same value as the MPS. We use the radial models proposed for IS data by Halme et al., through their duals, to introduce the supporting hyperplane, which approximates the indifference contour at the MPS while keeping the radial model applicable after decomposing the IS variables.
The PE scores are then calculated for each DMU in the output direction, without solving any linear programming problems, by comparing the inefficient units to units having the same value as the MPS. The method proposed in this paper is not inferior to that of Halme et al. and does not depend on the supporting hyperplane. In Section 2, we review IS data and PE analysis. Our estimations for producing PE scores are discussed in Section 3. A numerical example is presented in Section 4, and Section 5 gives the concluding remarks.
2. Research Methodology (Preferred Efficiency Analysis)
We have frequently observed negative data values in our analysis. This happens when we take the difference of two Ratio Scale (RS) variables. Pastor (1994) gives examples of such variables in the DEA literature, including the rate of growth of gross domestic product per capita, profit, and taxes (where profit equals income minus cost). We suggest replacing the original IS variable with the two RS variables even if the original variable happens to be positive in the data, because division is not allowed on an interval scale. The IS variable is decomposed into RS variables as follows. Assume that some of the inputs (among the X variables) and some of the outputs (among the Y variables) have been measured on the IS. Replace each of them with two RS variables whose difference is the original variable. The new input matrix contains the new RS input variables originating from the IS input variables (the minuends), followed by the RS variables originating from the IS output variables (the subtrahends). In the new output matrix, the output variables originating from the IS input variables (the subtrahends of the corresponding differences) are listed first, followed by the output variables corresponding to the original IS outputs (the minuends). The coefficients of the two new RS variables are set equal in the dual formulation; each resulting new constraint in the dual creates a new variable, denoted here by Z, in the primal. After decomposing the IS variables into one input and one output each, we obtain the radial combined BCC dual model (1), which is the dual of the radial combined BCC primal model (2).
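To make the decomposition step concrete, the following short sketch (Python; the array names are illustrative and not taken from the paper) splits an IS output such as profit into its minuend and subtrahend, places the subtrahend in the new input matrix and the minuend in the new output matrix, and checks that their difference reproduces the original IS values. It uses the first three units of the numerical example in Section 4.

```python
import numpy as np

# First three units of the example in Section 4 (columns = units).
x_rs = np.array([[50., 48., 49.]])            # ratio-scale input I
o1_minuend = np.array([38., 32., 33.])        # minuend of the IS output O1 (e.g., revenue)
o1_subtrahend = np.array([54., 49., 39.])     # subtrahend of the IS output O1 (e.g., cost)
o1_is = o1_minuend - o1_subtrahend            # the interval-scale output itself (-16, -17, -6)
y_rs = np.array([[58., 48., 45.]])            # ratio-scale output O2

# For an IS *output*, the minuend becomes a new output and the subtrahend a new input
# (an IS *input* would be split the other way round).
X_new = np.vstack([x_rs, o1_subtrahend])      # new input matrix
Y_new = np.vstack([o1_minuend, y_rs])         # new output matrix

# The difference of the two RS variables reproduces the original IS data.
assert np.allclose(Y_new[0] - X_new[1], o1_is)
print(X_new)
print(Y_new)
```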
It is worth noting that, in addition to the combined model above, input-oriented and output-oriented models can also be considered. If the input-contraction part of model (2) is set to zero, we obtain the output-oriented formulation; the input-oriented model is derived analogously. Units that are efficient before the decomposition remain efficient after it; however, because the decomposition increases the number of variables in the DEA model, inefficient units may become efficient, and in such cases only the scores of the previously inefficient units change. For a more detailed explanation, see "Dealing with Interval Scale Data in DEA" by Halme et al. (1998). PE Analysis (PEA) measures the efficiency of each unit with respect to the indifference contour of the DM's Preferred Function (PF) passing through the Most Preferred Solution (MPS). To evaluate the efficiency of each unit we would need full knowledge of the PF; PEA instead incorporates the DM's preference information about a desirable combination of inputs and outputs into the analysis. The MPS is a virtual or existing DMU on the efficient frontier with the most desirable values of inputs and outputs. In practice the PF is unknown, so the indifference contour cannot be characterized precisely and has to be approximated. Halme et al. (1999) assumed that the DM's (unknown) PF is pseudo-concave, strictly increasing in the outputs and strictly decreasing in the inputs, and attains its maximal value over the PPS at the MPS. In the following models, a point (unit) is preferred inefficient with respect to any such strictly increasing pseudo-concave PF with a maximum at the MPS if the optimal value of problem (3) is strictly positive.
In problem (3), the intensity variables associated with the reference units that span the MPS are not restricted in sign; these reference units and the corresponding intensity vector are tied to the MPS through model (4), which expresses the MPS as a combination of the observed units on the efficient frontier.
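For orientation, the combined radial test problem of Halme et al. (1999) can be sketched as follows; the symbols \(x_0, y_0\) (the evaluated unit), \(\lambda\) (intensity vector) and \(\lambda^{*}\) (the weights defining the MPS) are introduced here only for illustration and may not match the author's exact notation in models (3)–(4):

\[
\begin{aligned}
\max \quad & Z=\sigma+\varepsilon\,(\mathbf 1^{\top}s^{+}+\mathbf 1^{\top}s^{-})\\
\text{s.t.}\quad & Y\lambda-\sigma\,y_{0}-s^{+}=y_{0},\\
& X\lambda+\sigma\,x_{0}+s^{-}=x_{0},\\
& \mathbf 1^{\top}\lambda=1,\qquad s^{+},s^{-}\ge 0,\\
& \lambda_{j}\ \text{free for } j \text{ with } \lambda_{j}^{*}>0,\qquad \lambda_{j}\ge 0\ \text{otherwise},
\end{aligned}
\]

where \((X\lambda^{*},Y\lambda^{*})\) is the MPS; the unit \((x_{0},y_{0})\) is preferred inefficient whenever the optimal value \(Z^{*}\) is strictly positive.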
In a typical primal DEA model, all intensity variables are assumed to be nonnegative. In the modified model, the intensity variables associated with the MPS can take negative values, which is what allows the DM's value judgments to be included through the MPS and improves the DEA model's ability to analyze the decision-making units comprehensively.
3. Research Findings
In this section, we present the two methods developed in this work for producing preferred efficiency scores: the first is based on common weights obtained through a compromise solution, and the second uses a supporting hyperplane at the MPS. For each method, we explain how it works and note its advantages and limitations.
Method 1: Using common weights
We can reformulate model (3) for the IS variables by following model (2); substituting the decomposed variables into model (4) yields model (5), in which the constraints to be adjusted later are marked with (*).
The constraints marked (*) in model (5) can be modified as needed; by a change of variables we obtain model (6).
In model (6), the reference weights correspond to the MPS. The dual of model (6) is model (7).
In model (7), the decision variables are the weights to be applied to the outputs and inputs, respectively. An optimal solution of this problem gives the normal vector of a supporting hyperplane that contains the PPS in one of its half-spaces and passes through the MPS. Our aim is to introduce an MOLP for finding CW whose efficient solution yields a tangent hyperplane at the MPS approximating the indifference contour of the unknown PF. First, for each DMU under consideration we introduce model (8), which carries a normalization constraint marked (*).
In model (8), the right-hand side is the optimal value obtained from model (5) for the DMU under consideration. Note that the MPS is on the efficient frontier and is also the most preferred solution. The constraint (*) in model (8) is introduced to normalize the weights and to make the optimal solution unique. To identify CW, we then present the MOLP problem (9), which optimizes the objective functions of the individual problems (8) simultaneously, subject to precisely the same constraints.
To solve the MOLP model (9), we use compromise programming to generate a vector of deviation scores that is as close as possible to the scores computed from model (8); the ideal solution is a vector of zero deviations. The mathematical program used for this purpose is the compromise solution with parameter p, given as model (10), whose constraints are precisely those of model (8).
First, we set p = ∞ and convert the problem into a min–max problem over the weights; its linearized form is model (11), and the remaining constraints are exactly the same as those of model (8).
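A generic sketch of this compromise-programming step (the deviation variables \(d_j\) and the weight set \(\Omega\) are illustrative placeholders; the actual constraint set is that of model (8) and is not reproduced here):

\[
\min_{w\in\Omega}\ \Bigl(\sum_{j=1}^{n} d_{j}^{\,p}\Bigr)^{1/p},
\qquad d_{j}=\theta_{j}^{*}-\theta_{j}(w)\ \ge 0,\ \ j=1,\dots,n,
\]

where \(\theta_{j}^{*}\) is the score of DMU \(j\) from its own problem (8) and \(\theta_{j}(w)\) its score under the common weights \(w\). For \(p=\infty\) this becomes the min–max (Tchebycheff) problem

\[
\min_{w\in\Omega}\ \max_{j} d_{j}
\quad\Longleftrightarrow\quad
\min\ \delta \quad \text{s.t.}\quad d_{j}\le\delta,\ \ j=1,\dots,n,
\]

which corresponds to the linearized form described above.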
Solving model (11) gives us a CW, from which we obtain the tangent hyperplane at the MPS that approximates the indifference contour of the unknown PF. Our aim is to calculate, for each DMU, the value that projects it onto the indifference contour of the PF at the MPS. However, we do not know the exact PF, so we use the tangent hyperplane in place of its indifference contour and require the projected point to lie on that hyperplane. We measure PE scores only in the output orientation, and therefore take the projection in the output-oriented direction. Note that both new variables obtained by decomposing an IS variable into two ratio scale variables are treated as objectives; non-discretionary variables are not considered.
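As an illustration of how the output-oriented PE scores can then be read off without solving further LPs, the sketch below (Python; the function and variable names are hypothetical) assumes the score of a unit is the radial output expansion needed to reach the hyperplane u'y − v'x = α passing through the MPS under the common weights (u, v):

```python
import numpy as np

def pe_scores_output_oriented(X, Y, u, v, mps):
    """Output-oriented PE scores w.r.t. the hyperplane u'y - v'x = alpha through the MPS."""
    alpha = u @ Y[:, mps] - v @ X[:, mps]     # hyperplane level attained at the MPS
    value = u @ Y - v @ X                     # value of every DMU under the common weights
    # sigma solves u'((1 + sigma) * y_o) - v'x_o = alpha  =>  sigma = (alpha - value_o) / (u'y_o)
    return (alpha - value) / (u @ Y)

# Toy usage: 2 inputs, 2 outputs, 3 units (columns), made-up common weights, MPS = third unit.
X = np.array([[50., 48., 19.], [54., 49., 4.]])
Y = np.array([[38., 32., 7.], [58., 48., 4.]])
u = np.array([0.4, 0.6]); v = np.array([0.3, 0.7])
print(pe_scores_output_oriented(X, Y, u, v, mps=2))   # the MPS itself scores 0
```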
Method 2: Using supporting hyperplane
We use the supporting hyperplane of the PPS at the MPS to approximate the indifference contour of the unknown PF. The weights of the output/input variables are obtained as the normal vector of this supporting hyperplane through the dual of model (12), which evaluates the unit corresponding to the MPS after the IS variables have been decomposed. The dual of model (12) is model (13).
The hyperplane obtained from model (13) is tangent to the PPS at the reference DMUs of the MPS; these reference units are efficient, lie on the efficient frontier, and usually the reference set consists of the MPS alone. Therefore, this hyperplane passes through the MPS. First, we obtain the MPS by computing the technical efficiency of each DMU (after decomposing the IS variables) and selecting the MPS among the efficient DMUs with the aid of the DM. We then need an approximate value of the score that projects each unit onto the indifference contour of the PF at the MPS; since the PF is unknown, the supporting hyperplane at the MPS is used in place of its indifference contour. We utilize model (6) and take its optimal solution as the normal vector that determines the equation of the supporting hyperplane of the PF at the MPS. We compute PE scores only in the output orientation, so the projection is taken in the output direction. Note that both new variables obtained by decomposing an IS variable into two ratio scale variables are treated as objectives; the case in which one of the new variables is non-discretionary is not considered.
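One standard way to obtain such a supporting hyperplane numerically is to solve an output-oriented BCC multiplier model at the MPS; the sketch below (Python with SciPy) follows that textbook formulation, which may differ in orientation and normalization from the paper's models (12)–(13):

```python
import numpy as np
from scipy.optimize import linprog

def supporting_hyperplane(X, Y, o, eps=1e-6):
    """Weights (u, v, v0) of a hyperplane u'y - v'x - v0 = 0 supporting the VRS PPS and
    passing through the (efficient) unit o, via the output-oriented BCC multiplier model."""
    m, n = X.shape                      # inputs x units
    s = Y.shape[0]                      # number of outputs
    c = np.concatenate([np.zeros(s), X[:, o], [1.0]])            # min v'x_o + v0
    A_eq = np.concatenate([Y[:, o], np.zeros(m), [0.0]])[None]   # u'y_o = 1
    A_ub = np.hstack([Y.T, -X.T, -np.ones((n, 1))])              # u'y_j - v'x_j - v0 <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (s + m) + [(None, None)], method="highs")
    assert res.success
    return res.x[:s], res.x[s:s + m], res.x[-1]
```

The returned normal vector can then be used exactly as in Method 1 to project the remaining units onto the hyperplane in the output direction.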
4. Numerical Example
In this section, we use the data presented in Table 1 to demonstrate the effectiveness of the approach proposed in this work. Specifically, we assess the PE of 14 units, each of which consumes one input to produce two outputs. Note that some of the decision-making units (DMUs) have negative data, resulting from the IS output variable O1.
DMUs | U1 | U2 | U3 | U4 | U5 | U6 | U7 | U8 | U9 | U10 | U11 | U12 | U13 | U14 |
I | 50 | 48 | 49 | 49 | 48 | 50 | 47 | 47 | 45 | 48 | 47 | 35 | 19 | 23 |
O1 | -16 | -17 | -6 | 5 | 4 | -12 | 3 | -14 | 2 | -4 | 1 | 1 | 3 | -5 |
O2 | 58 | 48 | 45 | 35 | 34 | 25 | 25 | 25 | 16 | 15 | 14 | 13 | 4 | 4 |
Table 1. The input/output variable values of the 14 units.
We need to decompose the IS output variable O1 into two RS variables whose difference gives the original values. For example, for U1 we have O1 = 38 − 54 = −16; the minuends (38, 32, 33, …) become a new output and the subtrahends (54, 49, 39, …) become a new input, as shown in Table 2.
DMUs | U1 | U2 | U3 | U4 | U5 | U6 | U7 | U8 | U9 | U10 | U11 | U12 | U13 | U14 |
O1 subtrahend (new input) | 54 | 49 | 39 | 31 | 31 | 31 | 28 | 40 | 28 | 35 | 20 | 18 | 4 | 11 |
I (input) | 50 | 48 | 49 | 49 | 48 | 50 | 47 | 47 | 45 | 48 | 47 | 35 | 19 | 23 |
O1 minuend (new output) | 38 | 32 | 33 | 36 | 35 | 19 | 31 | 26 | 30 | 31 | 21 | 19 | 7 | 6 |
O2 (output) | 58 | 48 | 45 | 35 | 34 | 25 | 25 | 25 | 16 | 15 | 14 | 13 | 4 | 4 |
Table 2. The new variable values after decomposing.
Method 1: After decomposing and using the data in Table 2, we compute the technical efficiency of each DMU with the new variable values. Units U1, U4, and U13 turn out to be efficient, and we pick U13 as the MPS. One of the decomposed variables can be viewed either as an output or as an input; we treat it as an input. Solving the compromise model then leads to an efficient solution, which provides a CW for the output/input variables.
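As a sanity check on this screening step, the sketch below solves a standard combined-orientation BCC envelopment model for every unit of the decomposed data in Table 2 (Python with SciPy; a generic formulation that may differ in details from the models used in the paper). According to the text, U1, U4, and U13 should be the units with a zero score:

```python
import numpy as np
from scipy.optimize import linprog

# Decomposed data of Table 2: inputs = [I, subtrahend of O1], outputs = [minuend of O1, O2].
X = np.array([[50, 48, 49, 49, 48, 50, 47, 47, 45, 48, 47, 35, 19, 23],
              [54, 49, 39, 31, 31, 31, 28, 40, 28, 35, 20, 18,  4, 11]], dtype=float)
Y = np.array([[38, 32, 33, 36, 35, 19, 31, 26, 30, 31, 21, 19,  7,  6],
              [58, 48, 45, 35, 34, 25, 25, 25, 16, 15, 14, 13,  4,  4]], dtype=float)

def combined_bcc_score(X, Y, o):
    """max sigma s.t. Y@lam >= (1+sigma)*y_o, X@lam <= (1-sigma)*x_o, sum(lam)=1, lam,sigma>=0.
    A unit is radially efficient when the optimal sigma equals 0."""
    n = X.shape[1]
    c = np.concatenate([np.zeros(n), [-1.0]])                    # maximise sigma
    A_ub = np.vstack([np.hstack([-Y, Y[:, [o]]]),                # -Y@lam + sigma*y_o <= -y_o
                      np.hstack([X, X[:, [o]]])])                #  X@lam + sigma*x_o <=  x_o
    b_ub = np.concatenate([-Y[:, o], X[:, o]])
    A_eq = np.hstack([np.ones((1, n)), [[0.0]]])                 # convexity: sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return -res.fun

for o in range(X.shape[1]):
    sigma = combined_bcc_score(X, Y, o)
    print(f"U{o + 1}: sigma = {sigma:.3f}" + ("  (efficient)" if sigma < 1e-6 else ""))
```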
Method 2: We have selected unit U1 as the MPS, and both of the decomposed variables are treated as objectives. One of the decomposed variables can be viewed as either an output or an input; we have considered it as an input in our analysis. To find the output/input weights, we have used model (14). The weights obtained from its optimal solution yield the PE scores reported in Table 3.
DMUs | U1 | U2 | U3 | U4 | U5 | U6 | U7 |
Method 1 | 0.000 | 0.111 | 0.019 | 0.000 | 0.013 | 0.543 | 0.172 |
Method 2 | 0.119 | 0.143 | 0.095 | 0.125 | 0.017 | 0.606 | 0.140 |
Halme et al. | 0.000 | 0.075 | 0.008 | 0.000 | 0.001 | 0.118 | 0.045 |
DMUs | U8 | U9 | U10 | U11 | U12 | U13 | U14 |
Method 1 | 0.524 | 0.367 | 0.586 | 0.6398 | 0.321 | 0.000 | 1.243 |
Method 2 | 0.583 | 0.281 | 0.493 | 0.4973 | 0.262 | 0.000 | 1.281 |
Halme et al. | 0.130 | 0.031 | 0.058 | 0.1934 | 0.163 | 0.000 | 0.153 |
Table 3. The obtained PE scores.
5. Conclusion
In this paper, we introduce a new method to estimate preferred efficiency by using the supporting hyperplane at the MPS, assuming that this hyperplane is tangent to the (unknown) preferred function at the MPS. We also show that common weights can be found by solving a dual model for the decomposed IS variables; a deviation-based compromise (scalarization) model is proposed to determine these weights, from which the PE scores are calculated, and the DM can select the preferred solution among the resulting efficient solutions. Methods that use the original IS variables without decomposing the data are also possible, but finding the most preferred weights yields more accurate PE scores. CCR models can be used instead of BCC models; this changes the PPS and the supporting hyperplane and therefore the resulting measure of PE. Using the optimal weights of each decision-making unit instead of common weights may move us away from the desired PE for each unit, but the decision maker's preferences can help capture real performance. Finally, the same procedure can be used to obtain cost efficiency when the cost function is unknown.
References
Charnes A., Cooper W.W., Rhodes E. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2, 429-444.
Despotis D.K., Sotiros D. (2017). Value-based data envelopment analysis: A piece-wise linear programming approach. International Journal of Multicriteria Decision Making, 4(1), 47-68.
Farzipoor Saen R., Moghaddas Z., Vaez-Ghasemi M., Hosseinzadeh Lotfi F. (2020). Stepwise pricing in evaluating revenue efficiency in Data Envelopment Analysis: A case study in power plants. Scientia Iranica. DOI: 10.24200/SCI.2020.55350.4184.
Gerami J., Mozafari M.R., Wanke P.F. (2022). Improving information reliability of non-radial value efficiency analysis: An additive slacks-based measure approach. European Journal of Operational Research, 298(3), 967-978.
Halme M., Joro T., Koivu M. (1998). Dealing with interval scale data in Data Envelopment Analysis. IIASA.
Halme M., Joro T., Korhonen P., Salo S., Wallenius J. (1999). A value efficiency approach to incorporating preference information in data envelopment analysis. Management Science, 45, 103-115.
Halme M., Korhonen P. (1999). Restricting weights in value efficiency analysis. IIASA.
Hosseinzadeh Lotfi F., Ebrahimnejad A., Vaez-Ghasemi M., Moghaddas Z. (2020). Data Envelopment Analysis with R. Springer International Publishing.
Jahanshahloo G.R., Hosseinzadeh Lotfi F., Khanmohammadi M., Kazemimanesh M., Rezai V. (2010). Ranking of units by positive ideal DMU with common weights. Expert Systems with Applications, 37(12), 7483-7488.
Jahanshahloo G.R., Zohrehbandian M., Alinezhad A., Abbasian H., Abbasian Naghneh S., Kiani Mavi R. (2010). Finding common weights based on the DM's preference information. Journal of the Operational Research Society, 1796-1800.
Joro T., Korhonen P., Zionts S. (2003). An interactive approach to improve estimates of value efficiency in data envelopment analysis. European Journal of Operational Research, 149, 688-699.
Korhonen P., Siljamaki A., Soismaa M. (2002). On the use of value efficiency analysis and further developments. Journal of Productivity Analysis, 17, 49-64.
Korhonen P., Syrjanen M.J. (2005). On the interpretation of value efficiency. Journal of Productivity Analysis, 24, 197-201.
Lovell C.A.K., Pastor J.T. (1995). Units invariant and translation invariant DEA models. Operations Research Letters, 18, 147-151.
Pastor J.T. (1996). Translation invariance in Data Envelopment Analysis: A generalization. Annals of Operations Research, 66, 93-102.
Soleimani-Chamkhorami K., Hosseinzadeh Lotfi F., Jahanshahloo G.R., Rostamy-Malkhalifeh M. (2020). Preserving cost and revenue efficiency through inverse data envelopment analysis models. INFOR: Information Systems and Operational Research, 58(4), 1-18.
Stanek S., Kuchta D. (2020). Increasing earned value analysis efficiency for IT projects. Journal of Decision Systems, 29(1), 1-9.
Tajik Yabr A.H., Najafi S.E., Moghaddas Z., Shahnazaei P. (2022). Interval cross efficiency measurement for general two-stage systems. Mathematical Problems in Engineering, Article ID 5431358, 19 pages.
Vaez-Ghasemi M., Moghaddas Z., Farzipoor Saen R. (2021). Cost efficiency evaluation in sustainable supply chains with marginal surcharge values for harmful environmental factors: A case study in a food industry. Operational Research, 1-16.
Zohrehbandian M. (2011). Using Zionts-Wallenius method to improve estimate of value efficiency in DEA. Applied Mathematical Modelling, 35, 3769-3776.