Profit-aware application placement for integrated Fog–Cloud computing environments
Introduction
The Internet of Things (IoT) paradigm interconnects numerous devices through the Internet to collect and share data from physical environments. Existing Cloud-centric IoT models can meet the computational demand of different IoT-enabled systems such as smart cities and healthcare [15]. However, executing their latency-sensitive applications at remote Cloud datacentres can degrade service quality, and excessive dataflow towards the datacentres can congest the network [3]. Fog computing was introduced to overcome such limitations and deal with the large number of IoT devices at the edge network. The computing components within Fog, such as Raspberry Pi devices, personal computers, network routers, switches and micro datacentres, commonly known as Fog nodes, are heterogeneous and distributed. They offer infrastructure services to host and develop IoT applications, and process data closer to the sources [10]. Thus, compared to processing IoT data solely at remote Cloud datacentres, Fog computing reduces application service time and network congestion for different IoT-enabled systems [9], [27].
Fog nodes have less computational capability than Cloud datacentres, which hinders accommodating every IoT application at the edge level [29]. Therefore, different Cloud providers such as Amazon, Microsoft and Google have initiated integrating Fog and Cloud infrastructure to offer more extensive placement options for IoT applications [6]. The inclusion of Fog computing in the current Cloud-centric IoT model is expected to add US$ 203.48 million to their combined marketplace by 2022 [20]. It will also increase the operational cost of computing environments through additional energy consumption, Fog infrastructure deployment and greater network bandwidth usage [14]. In this case, without revenue maximization, it will be difficult for providers to make a profit from integrated environments. Conversely, a firm intention to maximize revenue often instigates providers to compromise application Quality of Service (QoS), which increases Service Level Agreement (SLA) violations. Imprecise pricing of Fog instances, set for revenue maximization, can also strain users' budgets [4]. Hence, enhancing provider's profit in integrated environments is challenging, as it requires balancing the expectations of users, the expenses of providers and the performance of Fog–Cloud infrastructure. Failure to ensure such balance prevents providers and users from realizing the potential of integrated computation [23].
In integrated environments, placing applications on suitable instances is crucial to enhance provider's profit and meet application QoS for users. Although different application placement policies for Fog computing have been proposed that prioritize deadline, completion time and revenue [13], [32], [37], these policies struggle to attain the aforementioned objectives individually in an integrated environment. The diversified affordability levels of users, the uneven expenses of operating heterogeneous instances and the commitment to compensate users for service failures further intensify the complexity of this application placement problem [28]. Therefore, an application placement policy is needed for integrated Fog–Cloud computing environments that can comply with their economic and performance-based attributes simultaneously.
In Internet economics, providers are encouraged to charge users more for improved services [17]. Since Fog instances improve application service delivery time, providers have scope to charge users an extra amount for these instances on top of their actual Cloud-based price. To users, providers can advertise this additional charge as the price of extending the instance from Cloud to Fog infrastructure. However, it should be justified by the scale of the performance improvement and the user's budget constraint. Such justification also clarifies the otherwise imprecise instance pricing and helps users identify how much they need to pay for executing applications in Fog. Additionally, to attain loyalty, providers can offer compensation to users on SLA violations. With such an instance pricing model and compensation method, an application placement policy in integrated environments can boost revenue and reinforce the need to meet application QoS, which consequently enhances provider's profit. However, existing works have not yet explored such a policy. Therefore, we propose a profit-aware application placement policy for integrated Fog–Cloud environments that increases revenue and reduces the number of failures in meeting applications' service delivery deadlines. It also sets the price of Fog instances according to their capability to improve service quality and compensates users based on the SLA violation rate of the computing environments.
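The surcharge described above, tied to the scale of performance improvement, can be sketched as a simple function of the service delivery time improvement ratio. The following is a minimal illustration, not the paper's exact pricing model; the function name and the linear surcharge form are assumptions:

```python
def fog_instance_price(cloud_price: float, t_cloud: float, t_fog: float) -> float:
    """Price of a Fog-extended instance: the Cloud-based price plus a
    surcharge proportional to the service delivery time improvement.
    If the Fog instance offers no improvement, no surcharge is applied."""
    if t_fog >= t_cloud:
        return cloud_price  # no improvement, no extra charge
    improvement_ratio = (t_cloud - t_fog) / t_cloud
    return cloud_price * (1.0 + improvement_ratio)

# A Fog instance that halves the service delivery time costs 1.5x the Cloud price.
print(fog_instance_price(10.0, 4.0, 2.0))  # 15.0
```

Under this hypothetical form, the surcharge is capped at the Cloud-based price itself (reached only as the Fog delivery time approaches zero), which keeps the extra charge justifiable against the user's budget.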
The major contributions of this paper are:
- Proposes an application placement policy for integrated Fog–Cloud environments based on an Integer Linear Programming (ILP) model that enhances provider's profit and meets applications' QoS simultaneously.
- Presents a pricing model for Fog instances which increases provider's revenue by incorporating their Cloud-based pricing with the service delivery time improvement ratio of applications placed on those instances.
- Develops a user compensation method that supports both user's and provider's interests through an inverse relationship between the compensation amount and the performance of the computing environments in observing SLA requirements.
- Demonstrates the performance of the proposed policy in enhancing profit, satisfying QoS and managing waiting time via simulation in iFogSim [16], and compares the outcomes with those of existing policies.
The rest of the paper is organized as follows. Section 2 highlights several relevant works from the literature. Section 3 describes the architecture of integrated environments along with the revenue estimation, pricing model and compensation method. The proposed application placement policy and its illustrative example are presented in Sections 4 and 5, respectively. Section 6 presents the simulation environment and the performance evaluation of the proposed policy. Finally, Section 7 concludes the paper.
Section snippets
Related work
Provider’s profit and cost maintenance have already been studied extensively in Cloud computing [24], [25]. However, Fog computing differs from Cloud as it is more distributed and composed of numerous resource-constrained, heterogeneous Fog nodes. Users' service expectations from Fog-based applications, their anticipated run-times and execution budgets are also diversified compared to those of Cloud-based applications. Therefore, it is very complicated to develop interoperable
Features of integrated Fog–Cloud environments
As a supplement to IoT, Fog computing executes latency-sensitive applications in proximity to data sources to offer services in real time. Conversely, as an extension of Cloud computing, Fog conducts IoT-data pre-processing so that the communication and computation overhead on Cloud datacentres can be reduced. Thus, Fog computing maintains an intermediate layer between IoT and Cloud computing [32], [43]. Based on this concept, the Computing Platforms for IoT applications are considered to be
Problem formulation
According to Eq. (11), provider’s Net profit from a Computing Platform increases if the Gross profit per billing period increases and the amount of compensation decreases. To support these conditions during placement rounds, the proposed Profit-aware Application Placement policy prioritizes each application in terms of its estimated Gross profit for execution on any instance and its latency sensitivity index. At the current time stamp, the latency sensitivity index refers to the remaining time from
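The prioritization just described, which combines estimated Gross profit with latency sensitivity, can be sketched as a sorting key. This is a simplified reading of the snippet, assuming the latency sensitivity index is the remaining time to the application's deadline and that urgency takes precedence over profit in ties; the `Application` fields and the tuple ordering are assumptions, not the paper's exact formulation:

```python
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    est_gross_profit: float  # estimated Gross profit if placed on an instance
    deadline: float          # absolute service delivery deadline (seconds)

def latency_sensitivity(app: Application, now: float) -> float:
    """Remaining time from the current time stamp to the deadline;
    smaller values mean the application is more latency-sensitive."""
    return app.deadline - now

def placement_order(apps, now):
    """Most latency-sensitive applications first; ties broken by
    higher estimated Gross profit."""
    return sorted(apps, key=lambda a: (latency_sensitivity(a, now),
                                       -a.est_gross_profit))

apps = [Application("a1", 5.0, 30.0), Application("a2", 8.0, 10.0)]
print([a.name for a in placement_order(apps, now=0.0)])  # ['a2', 'a1']
```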
Illustrative example
To numerically illustrate the basic steps of the proposed profit-aware application placement policy, we consider an integrated Fog–Cloud environment as depicted in Fig. 2.
Here, the Computing Platform offers four instances: two (ins#1, ins#2) are extended from the Cloud to the Fog part and two (ins#3, ins#4) remain at the Cloud part. The configuration of the instances is summarized in Table 3. Here, Kilobytes per second (KB/s) and Thousand instructions per second (TI/s) refer to the units of
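Given instance configurations expressed in KB/s (network bandwidth) and TI/s (processing speed), a first-order estimate of an application's service delivery time on an instance is transfer time plus computation time. This back-of-the-envelope model is an assumption for illustration, not the paper's exact formulation, and the function and parameter names are hypothetical:

```python
def service_delivery_time(data_kb: float, instructions_ti: float,
                          bandwidth_kbps: float, speed_tips: float) -> float:
    """Estimated service delivery time (seconds): time to move the input
    data to the instance plus time to execute the application's instructions."""
    return data_kb / bandwidth_kbps + instructions_ti / speed_tips

# An application with 500 KB of input and 20 TI of work on an instance
# offering 250 KB/s of bandwidth and 10 TI/s of processing speed:
print(service_delivery_time(500, 20, 250, 10))  # 4.0
```

A Fog-extended instance with the same TI/s but a shorter network path (effectively higher KB/s towards the data source) yields a smaller estimate, which is what justifies its higher price in the proposed model.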
Performance evaluation
In this section, the performance of the proposed profit-aware application placement policy is compared with the basic concepts of the Completion time- [32], Deadline- [13] and Revenue-prioritized placement policies [37]. In the Deadline-prioritized placement policy, deadline-critical applications are placed on computationally powerful instances with higher precedence, whereas in the Revenue-prioritized placement policy, this is done for economically beneficial applications. Alternatively, through Completion
Conclusions and future work
A profit-aware application placement policy for integrated Fog–Cloud environments is proposed in this work. The policy simultaneously increases provider’s Gross and Net profit by placing applications on suitable instances without violating their deadline constraints. It incorporates a pricing model that tunes the service charge of Fog instances according to their capability to reduce application service delivery time. The policy follows a compensation method to mitigate the effect of SLA
Declaration of Competing Interest
No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have impending conflict with this work. For full disclosure statements refer to https://doi.org/10.1016/j.jpdc.2019.10.001.
References (44)
- et al., Deadline-aware task scheduling in a tiered IoT infrastructure.
- et al., Internet of Things (IoT): A vision, architectural elements, and future directions, Future Gener. Comput. Syst. (2013).
- et al., Quality of experience (QoE)-aware placement of applications in Fog computing environments, J. Parallel Distrib. Comput. (2019).
- et al., A dynamic tradeoff data processing framework for delay-sensitive applications in cloud of things systems, J. Parallel Distrib. Comput. (2018).
- et al., A cybersecurity framework to identify malicious edge device in fog computing and cloud-of-things environments, Comput. Secur. (2018).
- et al., Mobile-aware service function chain migration in cloud–fog computing, Future Gener. Comput. Syst. (2019).
- et al., Fog computing micro datacenter based dynamic resource estimation and pricing model for IoT.
- SCIP: solving constraint integer programs, Math. Program. Comput. (2009).
- M. Afrin, M.R. Mahmud, M.A. Razzaque, Real time detection of speed breakers and warning system for on-road drivers, in: ...
- et al., Tradeoff between user quality-of-experience and service provider profit in 5G cloud radio access network, Sustainability (2017).
- Fog computing framework for Internet of Things applications.
- The big three make a play for the fog.
- Complexity and Approximation: Combinatorial Optimization Problems and their Approximability Properties.
- Performance-based compensation vs. guaranteed compensation: contractual incentives and performance in the Brazilian banking industry, Econ. Apl.
- Mobility-aware application scheduling in fog computing, IEEE Cloud Comput.
- Fog computing and its role in the Internet of Things.
- Sales force compensation: research insights and research potential.
- Optimal workload allocation in Fog–Cloud computing toward balanced delay and power consumption, IEEE Internet Things J.
- Cost efficient resource management in fog computing supported medical cyber-physical system, IEEE Trans. Emerg. Top. Comput.
- iFogSim: A toolkit for modeling and simulation of resource management techniques in the Internet of Things, edge and fog computing environments, Softw. Pract. Exp.
- Pricing and revenue sharing strategies for internet service providers.
- Service Level Agreements with Penalty Clause.
Redowan Mahmud is a Ph.D. student at the Cloud Computing and Distributed Systems (CLOUDS) Laboratory, Department of Computing and Information Systems, the University of Melbourne, Australia and awarded Melbourne International Research Scholarship (MIRS) for supporting his studies. He received B.Sc. degree in 2015 from Department of Computer Science and Engineering, University of Dhaka, Bangladesh. His research interests include Internet of Things, Fog and Mobile Cloud Computing.
Satish Narayana Srirama is a Research Professor and the head of the Mobile and Cloud Lab at the Institute of Computer Science, University of Tartu, Estonia. He received PhD in computer science from RWTH Aachen University, Germany, in 2008. His research focuses on cloud computing, mobile cloud, Internet of Things, fog computing, migrating scientific and enterprise applications to the cloud and large scale data analytics on the cloud. He is an Editor of Wiley Software: Practice and Experience.
Kotagiri Ramamohanarao received Ph.D. from Monash University. He was awarded the Alexander von Humboldt Fellowship. He has been at the University of Melbourne since 1980 and was appointed as a professor in computer science in 1989. He was the Head of Computer Science and Software Engineering and Head of the School of Electrical Engineering and Computer Science, University of Melbourne. He is on the editorial boards for Universal Computer Science and Data Mining, IEETKDE and VLDB Journal.
Rajkumar Buyya is a Redmond Barry Distinguished Professor and Director of the Cloud Computing and Distributed Systems (CLOUDS) Laboratory, University of Melbourne. He is one of the highly cited authors in computer science and software engineering. Microsoft Academic Search Index ranked him as the world’s top author in distributed and parallel computing during 2007–2012. He was founding Editor of the IEEE Transaction on Cloud Computing and is an Editor of Wiley Software: Practice and Experience.