Profit-aware application placement for integrated Fog–Cloud computing environments

https://doi.org/10.1016/j.jpdc.2019.10.001

Highlights

  • An application placement policy that enhances profit of Fog–Cloud service providers.

  • A pricing model for computing instances of integrated Fog–Cloud environments.

  • A compensation strategy to support user’s interests during SLA violations.

Abstract

The marketplace for Internet of Things (IoT)-enabled smart systems is rapidly expanding. The integration of the Fog and Cloud paradigms aims to harness both edge-device and remote datacentre-based computing resources to meet the Quality of Service (QoS) requirements of these smart systems. Due to the lack of instance pricing and revenue-maximizing techniques, it is difficult for service providers to make a comprehensive profit from such integration. This problem intensifies further when associated expenses and allowances are deducted from the revenue. Conversely, a rigid revenue-maximizing intent on the provider's side strains users' budgets and the system's service quality. To address these issues, we propose a profit-aware application placement policy for integrated Fog–Cloud environments. It is formulated as a constrained Integer Linear Programming model that simultaneously enhances profit and ensures QoS during application placement on computing instances. Furthermore, it compensates users for any violation of the Service Level Agreement (SLA) and sets the price of instances according to their ability to reduce service delivery time. The performance of the proposed policy is evaluated in a simulated Fog–Cloud environment using iFogSim, and the results demonstrate that it outperforms other placement policies in concurrently increasing the provider's profit and the users' QoS satisfaction rate.

Introduction

The Internet of Things (IoT) paradigm interconnects numerous devices through the Internet to collect and share data from physical environments. Existing Cloud-centric IoT models can meet the computational demand of different IoT-enabled systems such as smart cities and healthcare [15]. However, executing their latency-sensitive applications at remote Cloud datacentres can degrade service quality, and excessive dataflow towards the datacentres can congest the network [3]. Fog computing was introduced to overcome these limitations and deal with the large number of IoT devices at the edge of the network. The computing components within Fog, such as Raspberry Pi devices, personal computers, network routers, switches and micro datacentres, commonly known as Fog nodes, are heterogeneous and distributed. They offer infrastructure services to host and develop IoT applications, and process data closer to its sources [10]. Thus, compared to scenarios where IoT data is processed solely by remote Cloud datacentres, Fog computing reduces application service time and network congestion for different IoT-enabled systems [9], [27].

Fog nodes have less computational capability than Cloud datacentres, which prevents every IoT application from being accommodated at the edge level [29]. Therefore, Cloud providers such as Amazon, Microsoft and Google have begun integrating Fog and Cloud infrastructure to offer more extensive placement options for IoT applications [6]. The inclusion of Fog computing in the current Cloud-centric IoT model is expected to add US$203.48 million to their combined marketplace by 2022 [20]. It will also increase the operational cost of computing environments through additional energy consumption, Fog infrastructure deployment and greater network bandwidth usage [14]. In this case, without revenue maximization, it will be difficult for providers to profit from integrated environments. Conversely, a firm intention to maximize revenue often leads providers to compromise application Quality of Service (QoS), which increases Service Level Agreement (SLA) violations. Imprecise pricing of Fog instances, set purely for revenue maximization, can also burden users' budgets [4]. Hence, enhancing a provider's profit in integrated environments is challenging, as it requires balancing the expectations of users, the expenses of providers and the performance of the Fog–Cloud infrastructure. Failure to ensure such balance inhibits both providers and users from realizing the potential of integrated computation [23].

In integrated environments, the placement of applications on suitable instances is crucial for enhancing providers' profit and meeting application QoS for users. Although different application placement policies for Fog computing have been proposed that prioritize deadline, completion time and revenue [13], [32], [37], these policies struggle to attain the aforementioned objectives in an integrated environment. The diverse affordability levels of users, the uneven expenses of operating heterogeneous instances and the commitment to compensate users for service failures further intensify the complexity of this application placement problem [28]. Therefore, an application placement policy is needed for integrated Fog–Cloud computing environments that can comply with their economic and performance-based attributes simultaneously.

In Internet economics, providers are encouraged to charge users more for improved services [17]. Since Fog instances improve application service delivery time, providers have scope to charge users an extra amount for these instances on top of their regular Cloud-based price. To users, providers can advertise this additional charge as the price of extending the instance from the Cloud to the Fog infrastructure. However, it should be justified by the scale of the performance improvement and the user's budget constraint. Such justification also resolves the impreciseness of instance pricing and helps users identify how much they need to pay to execute applications in Fog. Additionally, to retain loyalty, providers can compensate users for SLA violations. With such an instance pricing model and compensation method, an application placement policy for integrated environments can boost revenue and reinforce the necessity of meeting application QoS, consequently enhancing the provider's profit. However, existing works have not yet explored such a policy. Therefore, we propose a profit-aware application placement policy for integrated Fog–Cloud environments that increases revenue and reduces the number of failures to meet applications' service delivery deadlines. It also sets the price of Fog instances according to their capability to improve service quality and compensates users based on the SLA violation rate of the computing environments.
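The pricing idea above can be sketched in a few lines. The function below is a minimal illustration, assuming a premium proportional to the service delivery time improvement ratio, capped by the user's budget; the function name, parameters and the linear form of the premium are illustrative assumptions, not the paper's formal model.

```python
def fog_instance_price(cloud_price: float,
                       cloud_delivery_time: float,
                       fog_delivery_time: float,
                       user_budget: float) -> float:
    """Price a Fog instance from its Cloud-based price and the measured
    service delivery time improvement, capped by the user's budget."""
    # Improvement ratio: fraction of delivery time saved by using Fog.
    improvement = (cloud_delivery_time - fog_delivery_time) / cloud_delivery_time
    # No premium is charged if the Fog instance offers no improvement.
    premium = cloud_price * max(improvement, 0.0)
    # The final price never exceeds what the user can afford.
    return min(cloud_price + premium, user_budget)
```

Under these assumptions, a Fog instance that cuts delivery time from 2 s to 1.5 s over a Cloud price of 10 units would be priced at 12.5 units, budget permitting.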

The major contributions of this paper are:

  • Proposes an application placement policy for integrated Fog–Cloud environments based on an Integer Linear Programming (ILP) model that enhances provider’s profit and meets application’s QoS simultaneously.

  • Presents a pricing model for Fog instances which increases provider’s revenue by incorporating their Cloud-based pricing with the service delivery time improvement ratio of applications placed on those instances.

  • Develops a user compensation method that supports both user’s and provider’s interest through inverse relationship between compensation amount and performance of the computing environments in observing SLA requirements.

  • Demonstrates the performance of proposed policy in enhancing profit, satisfying QoS and managing waiting time via simulation on iFogSim [16] and compares them with the outcomes of existing policies.
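The compensation method in the contributions above ties the refund inversely to how well the environment observed SLA requirements. The sketch below assumes a simple linear relationship bounded by the original charge; the exact functional form in the paper may differ, so treat this purely as an illustration of the inverse relationship.

```python
def compensation(charge: float, sla_observance_rate: float) -> float:
    """Return the amount refunded to a user for a billing period.

    sla_observance_rate: fraction of requests served within SLA (0..1).
    Perfect observance yields zero compensation; worse performance
    yields a larger refund, bounded above by the original charge."""
    # Clamp the observance rate into [0, 1] before computing the violation rate.
    violation_rate = 1.0 - min(max(sla_observance_rate, 0.0), 1.0)
    return charge * violation_rate
```

For example, a user charged 100 units in a period where 90% of requests met the SLA would receive roughly 10 units back, while a fully compliant period yields no compensation.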

The rest of the paper is organized as follows. Section 2 highlights several relevant works from the literature. Section 3 provides the architecture of integrated environments along with the revenue estimation, pricing model and compensation method. The proposed application placement policy and an illustrative example are presented in Sections 4 and 5, respectively. Section 6 presents the simulation environment and the performance evaluation of the proposed policy. Finally, Section 7 concludes the paper.


Related work

Providers' profit and cost maintenance have already been studied extensively in Cloud computing [24], [25]. However, Fog computing differs from Cloud in that it is more distributed and composed of numerous resource-constrained, heterogeneous Fog nodes. Users' service expectations of Fog-based applications, their anticipated run-times and execution budgets also differ from those of Cloud-based applications. Therefore, it is very complicated to develop interoperable

Features of integrated Fog–Cloud environments

As a supplement to IoT, Fog computing executes latency-sensitive applications in proximity to data sources to offer services in real time. Conversely, as an extension of Cloud computing, Fog pre-processes IoT data so that the communication and computation overhead on Cloud datacentres can be reduced. Thus, Fog computing maintains an intermediate layer between IoT and Cloud computing [32], [43]. Based on this concept, the Computing Platforms for IoT applications are considered to be

Problem formulation

According to Eq. (11), the provider's Net profit P from a Computing Platform increases if the Gross profit ϒ per billing period increases and the compensation amount ρ decreases. To support these conditions during placement rounds, the proposed Profit-aware Application Placement policy prioritizes each application r ∈ Rk in terms of its estimated Gross profit ec,r for execution on any instance c ∈ Ck and its latency sensitivity index ξr. At the current time stamp τ, ξr refers to the remaining time from
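The prioritization step described above can be sketched as follows: each application is ranked by its latency sensitivity (time remaining until its deadline at the current time stamp) together with its estimated Gross profit. The field names and the particular combined ordering (latency sensitivity first, profit as tie-breaker) are assumptions for illustration, not the paper's exact ILP objective.

```python
from dataclasses import dataclass


@dataclass
class Application:
    name: str
    deadline: float          # absolute deadline (s)
    est_gross_profit: float  # estimated Gross profit on the best candidate instance


def latency_sensitivity(app: Application, now: float) -> float:
    """Remaining time until the deadline; smaller means more latency-sensitive."""
    return max(app.deadline - now, 0.0)


def placement_order(apps, now: float):
    """Order applications for placement: latency-critical applications first,
    ties broken in favour of higher estimated Gross profit."""
    return sorted(apps, key=lambda a: (latency_sensitivity(a, now),
                                       -a.est_gross_profit))
```

For instance, two applications sharing the same deadline would be ordered by profit, while an application with a later deadline is deferred regardless of its profit under this particular ordering.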

Illustrative example

To numerically illustrate the basic steps of proposed profit-aware application placement policy, we have considered an integrated Fog–Cloud environment as depicted in Fig. 2.

Here, the Computing Platform offers 4 instances: two instances (ins#1, ins#2) are extended from the Cloud to the Fog part, and two instances (ins#3, ins#4) remain in the Cloud part. The configurations of the instances are summarized in Table 3. Here, kilobytes per second (KB/s) and thousand instructions per second (TI/s) refer to the units of

Performance evaluation

In this section, the performance of the proposed profit-aware application placement policy is compared with the basic concepts of the Completion time-prioritized [32], Deadline-prioritized [13] and Revenue-prioritized [37] placement policies. In the Deadline-prioritized placement policy, deadline-critical applications are placed on computationally powerful instances with higher precedence, whereas in the Revenue-prioritized placement policy, this is done for economically beneficial applications. Alternatively, through Completion

Conclusions and future work

A profit-aware application placement policy for integrated Fog–Cloud environments is proposed in this work. The policy simultaneously increases the provider's Gross and Net profit by placing applications on suitable instances without violating their deadline constraints. It incorporates a pricing model that tunes the service charge of Fog instances according to their capability to reduce application service delivery time. The policy follows a compensation method to mitigate the effect of SLA

Declaration of Competing Interest

No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have impending conflict with this work. For full disclosure statements refer to https://doi.org/10.1016/j.jpdc.2019.10.001.

Redowan Mahmud is a Ph.D. student at the Cloud Computing and Distributed Systems (CLOUDS) Laboratory, Department of Computing and Information Systems, the University of Melbourne, Australia, and was awarded the Melbourne International Research Scholarship (MIRS) to support his studies. He received his B.Sc. degree in 2015 from the Department of Computer Science and Engineering, University of Dhaka, Bangladesh. His research interests include the Internet of Things, Fog and Mobile Cloud Computing.

References (44)

  • Al-khafajiy, M., et al.

    Fog computing framework for Internet of Things applications

  • All, I.F.

    The big three make a play for the fog

    (2018)
  • Ausiello, G., et al.

    Complexity and Approximation: Combinatorial Optimization Problems and their Approximability Properties

    (2012)
  • Barbosa, K., et al.

    Performance-based compensation vs. guaranteed compensation: contractual incentives and performance in the Brazilian banking industry

    Econ. Apl.

    (2014)
  • Bittencourt, L.F., et al.

    Mobility-aware application scheduling in fog computing

    IEEE Cloud Comput.

    (2017)
  • Bonomi, F., et al.

    Fog computing and its role in the Internet of Things

  • Coughlan, A.T., et al.

    26 sales force compensation: research insights and research potential

  • Deng, R., et al.

    Optimal workload allocation in Fog-Cloud computing toward balanced delay and power consumption

    IEEE Internet Things J.

    (2016)
  • Gu, L., et al.

    Cost efficient resource management in fog computing supported medical cyber-physical system

    IEEE Trans. Emerg. Top. Comput.

    (2017)
  • Gupta, H., et al.

    iFogSim: A toolkit for modeling and simulation of resource management techniques in the Internet of Things, edge and fog computing environments

    Softw. - Pract. Exp.

    (2017)
  • He, L., et al.

    Pricing and revenue sharing strategies for internet service providers

  • Hogue, N.

    Service Level Agreements with Penalty Clause

    (2009)

    Satish Narayana Srirama is a Research Professor and the head of the Mobile and Cloud Lab at the Institute of Computer Science, University of Tartu, Estonia. He received PhD in computer science from RWTH Aachen University, Germany, in 2008. His research focuses on cloud computing, mobile cloud, Internet of Things, fog computing, migrating scientific and enterprise applications to the cloud and large scale data analytics on the cloud. He is an Editor of Wiley Software: Practice and Experience.

    Kotagiri Ramamohanarao received his Ph.D. from Monash University. He was awarded the Alexander von Humboldt Fellowship. He has been at the University of Melbourne since 1980 and was appointed as a professor in computer science in 1989. He was the Head of Computer Science and Software Engineering and Head of the School of Electrical Engineering and Computer Science, University of Melbourne. He is on the editorial boards of Universal Computer Science, Data Mining, IEEE TKDE and the VLDB Journal.

    Rajkumar Buyya is a Redmond Barry Distinguished Professor and Director of the Cloud Computing and Distributed Systems (CLOUDS) Laboratory, University of Melbourne. He is one of the highly cited authors in computer science and software engineering. Microsoft Academic Search Index ranked him as the world’s top author in distributed and parallel computing during 2007–2012. He was founding Editor of the IEEE Transaction on Cloud Computing and is an Editor of Wiley Software: Practice and Experience.
