[1] J. Masoudi, B. Barzegar and H. Motameni, "Energy-Aware Virtual Machine Allocation in DVFS-Enabled Cloud Data Centers," in IEEE Access, vol. 10, pp. 3617-3630, 2022, doi: 10.1109/ACCESS.2021.3136827.
[2] Y. Gao, H. Guan, Z. Qi, Y. Hou, and L. Liu, "A multi-objective ant colony system algorithm for virtual machine placement in cloud computing," Journal of Computer and System Sciences, vol. 79, no. 8, pp. 1230-1242, 2013.
[3] A. Gopu and N. Venkataraman, "Optimal VM placement in distributed cloud environment using MOEA/D," Soft Computing, vol. 23, no. 21, pp. 11277-11296, 2015.
[4] F. L. Pires and B. Barán, "Multi-objective virtual machine placement with service level agreement: A memetic algorithm approach," IEEE, pp. 203-210, 2013.
[5] D. Kliazovich, P. Bouvry, and S. U. Khan, "GreenCloud: a packet-level simulator of energy-aware cloud computing data centers," The Journal of Supercomputing, vol. 62, no. 3, pp. 1263-1283, 2012.
[6] D. Serrano et al., "SLA guarantees for cloud services," Future Generation Computer Systems, vol. 54, pp. 233-246, 2016.
[7] M.-H. Malekloo, N. Kara, and M. El Barachi, "An energy efficient and SLA compliant approach for resource allocation and consolidation in cloud computing environments," Sustainable Computing: Informatics and Systems, vol. 17, pp. 9-24, 2018.
[8] G. Cao, "Topology-aware multi-objective virtual machine dynamic consolidation for cloud datacenter," Sustainable Computing: Informatics and Systems, vol. 21, pp. 179-188, 2019.
[9] A. Jobava, A. Yazidi, B. J. Oommen, and K. Begnum, "On achieving intelligent traffic-aware consolidation of virtual machines in a data center using Learning Automata," Journal of computational science, vol. 24, pp. 290-312, 2018.
[10] G. L. Stavrinides and H. D. Karatza, "An energy-efficient, QoS-aware and cost-effective scheduling approach for real-time workflow applications in cloud computing systems utilizing DVFS and approximate computations," Future Generation Computer Systems, vol. 96, pp. 216-226, 2019.
[11] J. Krzywda, A. Ali-Eldin, T. E. Carlson, P.-O. Östberg, and E. Elmroth, "Power-performance tradeoffs in data center servers: DVFS, CPU pinning, horizontal, and vertical scaling," Future Generation Computer Systems, vol. 81, pp. 114-128, 2018.
[12] P. Festa, "A brief introduction to exact, approximation, and heuristic algorithms for solving hard combinatorial optimization problems," IEEE, pp. 1-20, 2014.
[13] R. L. Haupt and S. E. Haupt, Practical Genetic Algorithms, 2nd ed. USA: Wiley, 2004.
[14] A. Alkan and E. Ozcan, "Memetic algorithms for timetabling," in Proc. 2003 Congress on Evolutionary Computation (CEC '03), vol. 3, pp. 1796-1802, 2003.
[15] R. Shaw, E. Howley, and E. Barrett, "An energy efficient anti-correlated virtual machine placement algorithm using resource usage predictions," Simulation Modelling Practice and Theory, vol. 93, pp. 322-342, 2019.
[16] J. Tordsson, R. S. Montero, R. Moreno-Vozmediano, and I. M. Llorente, "Cloud brokering mechanisms for optimized placement of virtual machines across multiple providers," Future Generation Computer Systems, vol. 28, no. 2, pp. 358-367, 2012.
Journal of Applied Dynamic Systems and Control,Vol.7, No.3, 2024:60-66
A Green-aware Strategy for Virtual Machine Placement in Cloud Datacenters
H. Nasrolahi Matak1, H. Motameni2*, B. Barzegar3, E. Akbari4, H. Shirgahi5
1,2,4 Department of Computer Engineering, Sari Branch, Islamic Azad University, Sari, Iran
3 Department of Computer Engineering, Babol Branch, Islamic Azad University, Babol, Iran
5 Department of Computer Engineering, Jouybar Branch, Islamic Azad University, Jouybar, Iran
*Corresponding Author Institutional Email: motameni@iausari.ac.ir
Received: 2024.08.22; Accepted: 2024.09.29
Abstract–This paper presents a method for optimizing the dual-target virtual machine provisioning problem, which is a challenge in cloud data centers. In the cloud environment, it is important to balance the interests of service providers and customers. From the producers’ viewpoint, optimizing energy consumption and reducing costs are essential. From the users’ point of view, it is desirable to achieve an adequate level of quality of service, and network latency is one of the factors that contribute to its reduction. Therefore, optimizing bandwidth usage to reduce network delay is the second important objective considered in this study. To solve this problem, a two-objective method based on a genetic algorithm is presented, which provides near-optimal results in an acceptable time. The evaluations show the superiority of the proposed algorithm in terms of total energy consumption and total traffic in the network compared with methods based on a genetic algorithm, ant colony, greedy FFD algorithm, and randomized deployment method.
Keywords: Cloud computing, Virtual machine placement, Multiobjective optimization, Meta-heuristic algorithms
1. Introduction
Cloud computing has emerged from the rapid growth of computer and telecommunication systems, which now offer a variety of services to users over the Internet. Users connect to data centers to use on-demand services, and their computing needs have significantly increased the energy consumption of data centers, which has become a challenge for cloud computing [1]. Cloud computing is now recognized worldwide as an integral mechanism of information technology. Given the variety of service models, such as infrastructure, platform, and software as a service, cloud computing plays an undeniable role in hosting and providing services on the Internet. The benefits of cloud facilities for individuals and organizations include reliability, quality of service, and robustness [2]. Service producers and consumers, who play the central roles in the interactions of the cloud environment, have different interests in this area. From the customer's point of view, it is desirable to receive services of the highest quality, in accordance with the service level agreement (SLA), with as few violations as possible and minimum payment for each service used. From the producer's perspective, it is desirable to reduce energy consumption, resource waste, and costs, while complying with the provisions of the SLA to gain and maintain customer confidence [3].
The concept of virtualization is based on the fact that different user programs (virtual machines, VMs) can be executed on a set of servers (physical machines, PMs). This assignment process is called virtual machine placement (VMP) and belongs to the category of NP-hard problems in terms of time complexity [4]. Despite the benefits of virtualization, real-world experience shows that an aggressive reduction in energy consumption and resource waste can threaten quality-of-service requirements (such as throughput, response time, and network latency) that are documented in users' service level agreements. In fact, communication links, switching between physical machines, and the aggregation of data sent across different layers of the network are responsible for more than 30% of the total energy consumption of data centers [5]. Moreover, more than 70% of the data traffic in a data center is caused by data exchange between virtual machines [6]. As a rule, in large data centers, communication between virtual machines not only significantly increases energy consumption but can also become a serious bottleneck for quality-of-service requirements (such as response time and delay) [7-9]. Neglecting these requirements increases the possibility of a breach of the service level agreement, which erodes the user's confidence and satisfaction and may result in various types of fines, depending on the nature, extent, and severity of the breach [10]. A modern and promising way to address this challenge is to place virtual machines that exchange a lot of data at the smallest possible physical distance from each other. Therefore, when placing virtual machines, a trade-off must be found between reducing the energy consumption of computing and network communication devices and reducing long-distance data exchange between virtual machines, which saturates the bandwidth of the network topology [11].
Furthermore, the available bandwidth, which depends on the network topology, should also be taken into account [12].
The authors of [13] presented architectural principles for managing energy consumption in the cloud, together with energy-aware resource-allocation policies and scheduling algorithms that meet expectations for quality of service and device power consumption. In their scheme, local administrators send information about resource utilization and about the virtual machines selected for migration to global administrators. Various methods have been proposed for assigning virtual machines to physical nodes; the assignment problem is divided into two parts, the first concerning the placement of virtual machines on physical hosts and the second the optimization of virtual machine allocation. The authors of [14] presented a new framework called Green Cloud Computing, in which virtual machine management and scheduling is considered one of the fundamental principles for reducing energy consumption. Their main method for reducing energy consumption is to shut down physical machines with low utilization and migrate their virtual machines to other physical machines. The authors of [15] investigated a physical machine consolidation algorithm that is deployed periodically to minimize the number of online machines subject to the required online capacity and possible SLA violations. They analyzed different workload profiles and showed that intermittent workloads are better suited to dynamic acquisition; the mapping step is performed with a heuristic first-fit method while reducing the number of online machines. To optimize the allocated resources in cloud computing, the authors of [16] presented a scheduling and resource-allocation method based on PSO, in which multiple queues are used to manage and schedule tasks that cannot be served immediately at allocation time and are therefore placed in line.
Green computing is a trend in computer science that seeks to reduce the energy consumption and carbon footprint of computers in distributed platforms such as clusters, grids, and clouds. Recent studies have estimated that data centers account for approximately 1.5%-2% of total energy consumption, a demand that has increased sharply with the generalization of Internet services and distributed computing platforms. Regarding the efficiency of data centers, studies show that approximately 55% of the energy used in a data center is consumed by the computing system and the rest by the support systems. For this reason, green cloud computing is essential to making the future growth of cloud computing sustainable. Users also want their required services to be completed faster and in less time. Therefore, we seek a suitable solution to these problems in cloud computing. The goal of this research is to optimize the dual objectives of energy consumption and network-traffic load sharing when deploying virtual machines in cloud data centers with tree topologies.
2. The Proposed Method
2-1- Formulation of the proposed method
To construct a static two-objective VMP model for a DC with n VMs and m PMs, the mathematical model for each objective (the energy consumption of the servers and network switches, and the bandwidth consumption) is first formulated along with the corresponding constraints, and the comprehensive dual-objective optimization model is presented. The model is then coded and implemented using a genetic algorithm (GA). The GA results are fed into a complementary local-search algorithm, and the improved results obtained from combining the GA with this complementary algorithm are analyzed and evaluated.
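The GA stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the chromosome encoding (one PM index per VM), the single weighted fitness value combining the two objectives, and the operator settings (`pop_size`, `p_mut`, elitist truncation selection, one-point crossover) are all assumptions.

```python
import random

def fitness(placement, power, traffic, hops):
    # Weighted two-objective cost: power of the PMs in use plus total
    # network traffic (each VM-pair rate weighted by its hop count).
    # placement[i] is the index of the PM hosting VM i.
    n = len(placement)
    energy = sum(power[j] for j in set(placement))
    net = sum(traffic[i][j] * hops(placement[i], placement[j])
              for i in range(n) for j in range(n))
    return energy + net

def genetic_vmp(n_vms, n_pms, power, traffic, hops,
                pop_size=50, generations=200, p_mut=0.05):
    # Random initial population of placements.
    pop = [[random.randrange(n_pms) for _ in range(n_vms)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, power, traffic, hops))
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_vms)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_vms):               # mutation: move one VM
                if random.random() < p_mut:
                    child[i] = random.randrange(n_pms)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: fitness(p, power, traffic, hops))
```

In the paper's pipeline, the placement returned here would then be handed to the complementary local-search step for further refinement.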
2-2- Energy consumption model of the servers
It is obvious that the physical equipment of a DC consists of electrical and electronic parts, all of which consume electricity. Most of the total DC energy consumption is caused by the operation of the PMs, but other equipment such as switches, routers, and cooling devices also consumes energy; for this reason, a switch energy-consumption model is also considered in this research. This section focuses on PM energy consumption in two states: the full state (when the PM hosts VMs that occupy all of its CPU capacity) and the idle state (when no VM is assigned to the PM). Note that there is a linear relationship between energy consumption and CPU utilization [74]. Therefore, the energy consumption of PMj can be expressed as Eq. (1):
(1) P_j = P_j^idle + (P_j^full − P_j^idle) · u_j

where P_j^full and P_j^idle represent the average power consumption of PMj in the full-utilization state and the idle state, respectively, and u_j is the CPU utilization of PMj.
The total power consumption (TPC) of the servers in a DC with m PMs can be calculated from Eq. (2):

(2) TPC = Σ_{j=1..m} y_j · P_j

where y_j is a binary decision variable equal to one if PMj is on and zero otherwise.
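As a quick sanity check, the linear server power model of Eqs. (1)-(2) can be written directly in code; the wattage figures below are illustrative, not taken from the paper.

```python
def pm_power(p_idle, p_full, utilization):
    # Eq. (1): linear interpolation between idle and full-load power,
    # with utilization u_j in [0, 1].
    return p_idle + (p_full - p_idle) * utilization

def total_power(p_idle, p_full, utilization, active):
    # Eq. (2): sum the power of the PMs whose binary on/off variable is 1.
    return sum(y * pm_power(p_idle[j], p_full[j], utilization[j])
               for j, y in enumerate(active))

# Illustrative: two PMs rated 100 W idle / 250 W full; only PM 0 is on, at 50% load.
print(total_power([100, 100], [250, 250], [0.5, 0.8], [1, 0]))  # 175.0
```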
2-3-Bandwidth consumption model
The bandwidth consumption model presented in this section can be extended to other types of topology without losing its generality. The tree model consists of identical four-port switches and homogeneous communication lines connecting the three layers (Core-Aggregation and Aggregation-Access). In VMP, any VM can be placed on any PM, so the physical distance between PMs hosting interdependent VMs can be calculated by counting hops, i.e., the number of communication lines that must be crossed to route data from the source PM to the destination PM [1]. Different VMs can be independent of or dependent on each other in terms of data. For example, suppose that VMi and VMj are located on PMk and PMℓ, respectively, with an average data dependency rate of dv. The best scenario is when PMk and PMℓ are the same machine, i.e., k = ℓ. Otherwise, PMk and PMℓ are connected to each other through one of the following:
- Access switch: 2 hops for data transmission
- Aggregation switch: 4 hops for data transmission
- Core switch: 6 hops for data transmission.
The DC network can therefore be divided into 4 virtual zones, as described in Table 1. The basis for this segmentation is to monitor the positions of the PMs hosting dependent VMs. Figure 1 shows the coverage areas of each zone.
Figure 1: Sections covered by virtual regions in a tree-structured grid [1]
Table 1: Virtual regions of the DC network [1]
Zone | PMk & PMℓ | Hops
Z0 | Same PM (k = ℓ) | 0
Z1 | Under the same access switch | 2
Z2 | Under the same aggregation switch | 4
Z3 | Under the core switch | 6
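The zone logic of Table 1 can be expressed as a hop-count function over PM indices. The grouping sizes below (how many PMs share an access switch, how many access switches share an aggregation switch) are illustrative assumptions, since they depend on the specific tree layout.

```python
def hops(k, l, pms_per_access=2, access_per_agg=2):
    # Hop count between PM k and PM l per Table 1, for a tree in which
    # consecutive PM indices are grouped under shared switches.
    if k == l:
        return 0                                   # Z0: same PM
    if k // pms_per_access == l // pms_per_access:
        return 2                                   # Z1: same access switch
    pms_per_agg = pms_per_access * access_per_agg
    if k // pms_per_agg == l // pms_per_agg:
        return 4                                   # Z2: same aggregation switch
    return 6                                       # Z3: via the core switch
```

With the default grouping, PMs 0-1 share an access switch, PMs 0-3 share an aggregation switch, and anything beyond that crosses the core.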
For each pair of communicating VMs, VMi located on PMk and VMj located on PMℓ, a connection is defined. The volume of traffic between VMi and VMj, and generally between any pair of VMs, can be extracted from the TPM, whose general form is shown in Eq. (3):

(3) TPM = [d_ij]_{n×n}

where element d_ij is the average traffic rate from VMi to VMj.
In this matrix, all values are given in megabytes per second (MBps). TPM information can be obtained in various ways, for example by examining the DC profile or by extracting the behavior of VMs using data-mining techniques. Since this study addresses static VMP, a specific TPM is used for the calculations, whose elements are all extracted from the history of data exchanged between VMs and are therefore known in advance. According to the previous definitions, the data traffic (DT) between VMi hosted on PMk and VMj hosted on PMℓ can be expressed as follows:
(4) DT_ij = d_ij × H(k, ℓ)

where the hop count H(k, ℓ) ∈ {0, 2, 4, 6} is calculated according to the virtual region covering PMk and PMℓ (Table 1).
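Summing Eq. (4) over every VM pair gives the network-traffic term of the model. The 3-VM TPM and the toy zone function below are illustrative values, not data from the paper.

```python
ZONE_HOPS = {0: 0, 1: 2, 2: 4, 3: 6}   # hop counts per virtual zone (Table 1)

def total_data_traffic(placement, tpm, zone_of):
    # Sum of Eq. (4) over all VM pairs: each rate d_ij (MBps) from the TPM
    # is weighted by the hop count of the zone joining the two hosting PMs.
    n = len(tpm)
    return sum(tpm[i][j] * ZONE_HOPS[zone_of(placement[i], placement[j])]
               for i in range(n) for j in range(n))

# Illustrative 3-VM TPM (MBps); VMs 0 and 1 exchange 5 MBps, VMs 0 and 2 exchange 1 MBps.
tpm = [[0, 5, 1],
       [5, 0, 0],
       [1, 0, 0]]

def zone_of(k, l):
    # Toy layout: PMs 0-1 share an access switch; PM 2 is reachable only via the core.
    if k == l:
        return 0
    if {k, l} <= {0, 1}:
        return 1
    return 3

print(total_data_traffic([0, 1, 2], tpm, zone_of))  # 2*5*2 + 2*1*6 = 32
```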
2-4- Model of the energy consumption of switches
The energy-consumption model for switches presented in this section can be used with any tree topology, taking the communication structure of the connections into account. If the communicating VMs are located on the same server, no ToR switch needs to be active for that flow; in this particular case, all network switches are idle. Note that idle switches in the DC are not switched off, because restarting and reconfiguring them is time-consuming, and in DCs that must be constantly ready to receive and process VMs this waste of time is not acceptable. Switches consume little energy in the idle, semi-active state, which we ignore. If the communicating VMs are in zone 1 (Figure 1), 2 hops are required and the access switch must be on. Communication in zone 2 requires 4 hops and three switches to be on. Similarly, communication in zone 3, which must pass through the core switch, requires 6 hops and five switches to be on. Therefore, the number of switches required to establish a connection can be calculated from Eq. (5):
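The per-zone switch counts described above can be tabulated as a lookup; note that the zone-1 count (one access switch) is inferred from the hop pattern, since the text states only the zone-2 (three) and zone-3 (five) figures explicitly.

```python
# Switches that must be powered on for a single VM-to-VM flow, by zone:
# Z0 same PM (none); Z1 one access switch; Z2 two access + one aggregation;
# Z3 two access + two aggregation + one core.
SWITCHES_ON = {0: 0, 1: 1, 2: 3, 3: 5}

def switches_needed(zone):
    """Number of active switches for a flow in the given virtual zone."""
    return SWITCHES_ON[zone]

print(switches_needed(3))  # 5
```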