Abstract

The projected rise in wireless communication traffic has necessitated the advancement of energy-efficient (EE) techniques for the design of wireless communication systems, given the high operating costs of conventional wireless cellular networks and the scarcity of energy resources in low-power applications. The objective of this paper is to examine the recent paradigm shifts in EE approaches by reviewing traditional approaches to EE, analyzing recent trends, and identifying future challenges and opportunities. Considering the current energy concerns, nodes in emerging wireless networks range from limited-energy nodes (LENs) to high-energy nodes (HENs), each with entirely different constraints. In view of these extremes, this paper examines the principles behind energy-efficient wireless communication network design. We then present a broad taxonomy that tracks the areas of impact of these techniques in the network. We specifically discuss the preponderance of prediction-based energy-efficient techniques and their limits, and then examine the trends in renewable energy supply systems for future networks. Finally, we recommend more context-specific energy-efficient research efforts and cross-vendor collaborations to push the frontiers of energy efficiency in the design of wireless communication networks.

1. Introduction

Wireless communication networks (WCNs) afford much flexibility and ease of deployment and are thus predominant in mobile and pervasive applications. WCNs are crucial to the realization of all-pervasive network concepts such as the Internet of Things (IoT) [1] and the more inclusive Internet of Everything (IoE) [2]. Going by the concepts of the IoE and the IoT, virtually every object in our world, along with people, would be connected through appropriate processes [2], and a massive amount of data—big data—would be generated by this interconnection. Networks that would drive the IoE are required to be ubiquitous in coverage, with the capacity to support a tremendous number and a heterogeneous variety of network devices, data, and protocols for people-to-people (P2P), machine-to-machine (M2M), and people-to-machine (P2M) communication [3]. Emerging wireless networks are already treading this path and are expected to grow in the coming years [4, 5].

In the wake of this upscaling trend, research into innovative ways to mitigate energy usage has become crucial for two principal reasons. First, in several pervasively growing applications, energy replenishment is severely limited, constricting the allowable energy loss due to inefficiency: the more energy is lost to inefficiency, the less is available for network activities. Second, conventional wireless mobile networks, which subsume technologies for future ubiquitous wireless coverage, are designed to scale in power consumption to match scaling traffic [6]. This implies that operating costs could become prohibitively high as the network scales. These concerns have pushed energy-efficient (EE) design techniques to the center of future wireless systems design.

Several EE surveys of wireless networks in the recent literature focus on one or more aspects of wireless networks, on specific applications, or on specific energy-efficient techniques. Cao et al. [7], for example, survey big-data-based energy-efficient technologies in both high- and low-rate networks, and Anastasi et al. [8] focus on EE in wireless sensor networks (WSNs). More recently, Khan et al. [9] and Buzzi et al. [10] presented surveys on EE in WSNs and 5G, respectively. In the same vein, reviews on trade-off mechanisms [11], efficient routing processes [12], and prediction-based data reduction [13] have been reported in recent times. In this paper, we examine broad energy-efficient techniques in a top-down design approach and discuss the trends in EE designs.

Emerging wireless systems vary widely in application [14], with more and more application cases envisaged to emerge in the future. This sheer diversity makes typical EE approaches difficult. It can be observed, however, that the constraints in limited-energy nodes and high-energy nodes, apparent in the open literature, create energy-defined extremes such that EE approaches in these cases subsume efficient techniques in other applications, as shown in Table 1. We examine these application cases and the emerging design principles for EE. A systematic and holistic approach to EE that subsumes these scenarios gives a broad view of the options and opportunities for EE in an increasingly heterogeneous network that is expected to accommodate new networks in emerging WCN designs.

Limited-energy nodes (henceforth referred to as LENs) are designed for applications in environments where energy replenishment is severely limited [9]. Such nodes are powered by energy harvesting (EH) [15], which is generally intermittent, by capacity-limited batteries, or by both. Since network nodes are typically intended to last long, EE becomes a crucial factor in the development of such nodes. Low network efficiency shortens the network life of battery-powered applications and throttles the power available to energy-harvested systems. The WSN technology presents a classic paradigm of this classification and is expected to continue its pervasive growth pattern to drive the IoT space in the future [1]. Base stations (BSs), on the other hand, have been identified as the largest energy consumers in higher-energy systems (denoted high-energy nodes, HENs, in this paper) [16] and are envisaged to grow in number to provide ubiquitous wireless coverage [3, 4]. They are typically high-traffic nodes that are always on and hence require adequate cooling. These two rather contrasting extremes have attracted much of the EE attention in the open literature on future networks.

In this paper, we note two ways to approach energy-efficient techniques—at the component level and at the system level [11]. At the component level, every element of the wireless network is optimized, while system-level techniques focus on optimizing the communication processes between the nodes. Data management, as an EE approach, is gaining further attention due to increasing data volume, variety, velocity, and value. In the next section, we survey optimization frameworks in the literature and present a taxonomy of the underlying principles for energy efficiency. An outline of the paper is given in Figure 1. We further evaluate an essential aspect of emerging networks of the future—their dependence on some form of predictive approach—a definite trend in emerging communication networks that is expected to continue in the foreseeable future. These approaches are typically data intensive and thus computationally expensive, threatening the limits of efficient operation. We develop a framework to evaluate their implementation in existing networks. This framework can also be adapted to evaluate their cloud-based counterparts, which require much transmit power. We conclude by identifying opportunities and future directions in EE for wireless systems.

2. Energy-Efficient Techniques

First, we survey energy-efficient techniques in the network with the aim of circumspectly tracing their areas of impact in the design and operation of the network. To do this, we divide energy-efficient techniques into three: energy-efficient node design and operation (component level), efficient node-to-node interaction, and efficient node data management (system level). These approaches are considered in the following subsections.

2.1. Efficient Node Design and Operation

Nodes typically consist of a power supply module, a transceiver module, and a processing module [14, 17] and may or may not require human intervention to communicate. They may further include other components, depending on their application [14]. Each component is crucial, as individual component inefficiencies accumulate into overall node inefficiency and, by extension, total network inefficiency. Sometimes, inefficient low-quality hardware components may require compensation to correct performance variations, which further incurs processing power [18]. Hence, component design optimization is a primary step towards improving EE.

Moreover, in node operation, variations in data traffic and in node position relative to a source or destination make the demand for network activities time varying and position specific. Superfluous network activities—signaling when and to where there is no need—use more power than necessary and account for substantial energy losses in conventional node operation. We discuss efficient node design, as illustrated in Figure 2, in the next subsection and smart node operation, illustrated in Figure 3, in the subsection that follows.

2.1.1. Efficient Component Design

The primary components of a node include the processing module and the transceiver module. Logic devices for computation are generally complementary metal-oxide-semiconductor- (CMOS-) based due to lower static power consumption. The processors handle all associated computing tasks dictated through programs, typically stored on a memory device. A node could house more than one processor for different tasks, which may include modulation, analog-to-digital conversion, filtering, and other specific tasks.

The power dissipated on a CMOS chip is generally categorized as static power or dynamic power. Static power is due to short-circuit, bias, and leakage currents [19], which are becoming increasingly significant as more transistors are integrated on a chip. The dynamic power per unit time dissipated by a chip is a function of the capacitance being charged or discharged, $C$; the voltage swing, $V$; the activity weighting, $\alpha$, which is the probability that a transition occurs; and the switching frequency, $f$; as given in Equation (1) [20]:

$$P_{\text{dyn}} = \alpha C V^{2} f. \qquad (1)$$
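As a quick illustration of Equation (1), the short sketch below evaluates the dynamic power at two hypothetical operating points; the values are placeholders chosen for illustration and are not taken from [20].

```python
# Illustrative evaluation of Equation (1): P_dyn = alpha * C * V^2 * f.
# The operating points below are hypothetical, chosen only to show the
# quadratic dependence on supply voltage and linear dependence on frequency.

def dynamic_power(alpha: float, c_farads: float, v_volts: float, f_hz: float) -> float:
    """Average dynamic (switching) power of a CMOS block in watts."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Example: halving V and f cuts dynamic power by a factor of 8.
nominal = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=1.2, f_hz=1e9)   # ~0.144 W
scaled = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=0.6, f_hz=0.5e9)  # ~0.018 W
print(f"nominal: {nominal:.3f} W, scaled: {scaled:.3f} W")
```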

Logic devices further heat up during switching and may require cooling to check temperature-induced short-term inefficiencies and long-term defects, thus further increasing operating power. The goal of an efficient processor design is to reduce the energy used up for computation, and hence for cooling. The concerns for high-rate processors are slightly different from those for low-rate processors.

Generally, many processing devices are synchronous logic systems which employ clocks for chip-wide synchronization. Typically, the clock signal is routed to different parts of the chip using buffers, requiring significant energy in large chips. Clock-related power is thus a considerable component of dynamic power consumption: more than a quarter of the power dissipated in a typical high-performance processor is for synchronization [21]. Most conventional energy reduction techniques seek to achieve efficient designs that minimize clock activity, maximize clock performance, or eliminate clock power.

Generally, methods for EE in logic design span the system level, register-transfer level, logic level, and circuit level to decrease $C$, $V$, or $f$ in Equation (1), or to reduce static power. These techniques are listed in Table 2. It is crucial to note that power consumption and performance are conflicting objectives in processor design: reducing power consumption only saves energy if the extra time required to accomplish the task does not offset the power saved. Performance per watt is therefore an important EE metric for logic devices.

In LENs, processors are deliberately underclocked to use less power at the expense of performance. Also, in EH applications, volatile processors are undesirable, as they would be inefficient where the primary energy supply is uncontrollable. The intermittency of such a power supply necessitates backup/reinitialization schemes for computational accuracy, which can be so recurrent as to impede forward progress and incur significant energy overhead [41]. Approximation-based computing in applications amenable to approximation [42], and processing based on nonvolatile ferroelectric random-access memory [43], resistive random-access memory [44], magnetic random-access memory [45], negative-capacitance field-effect transistors [46], and ferroelectric field-effect transistors [47], are alternatives currently being explored for LEN applications. Emerging technologies are further exploring nonvolatile spintronic-based processors that use electron spin states rather than capacitive switching [48]. An interesting report on a nonvolatile design for energy-harvested applications is given in [49].
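To illustrate the backup/reinitialization overhead described above, the sketch below models a checkpointed computation that resumes from a hypothetical nonvolatile store after random power failures; it is a toy model of intermittent computing, not a representation of any of the cited designs.

```python
# Minimal sketch of checkpointed (intermittent) computation on an
# energy-harvested node. The nonvolatile store and the power-failure points
# are hypothetical; real designs (e.g., FRAM-based) differ in detail.
import random

nonvolatile = {"i": 0, "acc": 0}   # state that survives power loss
checkpoint_count = 0               # counts backup operations (overhead)

def checkpoint(i: int, acc: int) -> None:
    global checkpoint_count
    nonvolatile["i"], nonvolatile["acc"] = i, acc
    checkpoint_count += 1

def run_until_power_fails(n: int, fail_prob: float) -> bool:
    """Resume from the last checkpoint; return True if the task finished."""
    i, acc = nonvolatile["i"], nonvolatile["acc"]
    while i < n:
        acc += i * i          # the useful work: sum of squares
        i += 1
        checkpoint(i, acc)    # frequent backups protect forward progress...
        if random.random() < fail_prob:
            return False      # ...but power loss still interrupts the run
    return True

random.seed(1)
restarts = 0
while not run_until_power_fails(n=1000, fail_prob=0.01):
    restarts += 1
print(f"result={nonvolatile['acc']}, restarts={restarts}, "
      f"checkpoint operations={checkpoint_count}")
```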

There is a vital association between the hardware architecture, operating system, and applications, and between the different components within a node, which can be exploited to improve EE. Energy-efficient techniques can be implemented, with varying degrees of flexibility, at either the software or the hardware level. We refer the interested reader to [50] for detailed hardware-level energy-efficient techniques and to [51] for approaches to EE at the software level.

Transceivers serve to transmit and receive data over a wireless medium. Wireless links can use radio waves, microwaves, infrared, or visible light for communication. Radio frequency (RF) transceivers are common in wireless systems, with non-RF transceivers expected to increase in specific deployment scenarios, such as indoor applications in 5G deployments and beyond [4]. RF transceivers typically contain an up-converter and power amplifier for transmission and a low-noise amplifier and down-converter for reception. They are coupled to antennas using bandpass filters and to a baseband modem, which comprises chipsets for analog or digital modulation and for analog-to-digital or digital-to-analog conversion.

The transceiver is the most significant energy-consuming component of a wireless node [17]. A key metric for measuring the energy efficiency of wireless transceivers is energy per bit, the average energy required to transmit or receive a single bit of data [52]. Recent trends have focused on efficient modulation techniques [53–55] and beamforming using antenna arrays to direct radiation and improve directional antenna gain [56, 57]. Directional antennas are critical to power management as they allow radiation to be directed to where it is needed.

For power supply, energy harvesting (EH) and batteries are essential components in LENs. A report on the EE of far-field wireless power transfer is presented in [58]. Batteries are critical in battery-based LENs as they determine node and network life, even when energy harvesting techniques are integrated [59]. Batteries have been found to last less than their predicted lifetime because predictions do not take discharge current, temperature, application duty cycle, and other factors into account. Inefficient networks are characterized by high discharge currents and, when energy harvesting techniques are integrated, higher discharge-recharge cycle rates, both of which can impair battery function. Battery models for LENs of practical accuracy would give insights that can be assimilated into the network design for optimal node function. All other components in the node must likewise be designed with EE in mind to consolidate EE in the node.

2.1.2. Efficient Operation

Network components cannot afford to be always on, because they are not always required to be in use, except in uninterrupted monitoring applications. Efficient operations at the node level and the component level, illustrated in Figure 3, are discussed in the following subsections. It is necessary to state that network-dictated adaptive processes that depend on network-wide traffic information are discussed under efficient node-to-node interaction (Section 2.3) and are excluded from this classification. Here, we consider how a node manages its power based on the traffic demand peculiar to it.

(1) Node Sleep. Putting whole nodes to sleep when they are inactive has been shown to reduce inefficiencies [60]. Sleeping nodes are either awakened using a passive wake-up receiver [61] or scheduled to come alive at some time. In scheduled MAC-based systems, nodes are given a specific time slot for communication, after which they are allowed to sleep until their next time slot [62, 63]. To save energy, a wake-up receiver can run in a low-duty-cycle mode, where it is scheduled between on and off states; the radio then wakes up the node only when communication is necessary. Schedule-based MAC protocols and passive wake-up radio are presented in [63].
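The sketch below illustrates the energy saving from a scheduled, slot-based duty cycle relative to always-on listening; the slot structure and the power figures are hypothetical placeholders, not values from the cited protocols.

```python
# Sketch of a scheduled (TDMA-like) duty cycle: a node is awake only in its
# own slot and sleeps otherwise. Power figures are hypothetical placeholders.

ACTIVE_MW, SLEEP_MW = 60.0, 0.015   # radio active vs. deep-sleep power (mW)

def frame_energy_mj(num_slots: int, slot_ms: float, my_slots: int = 1) -> float:
    """Energy (mJ) consumed by one node over one frame of the schedule."""
    awake_ms = my_slots * slot_ms
    asleep_ms = (num_slots - my_slots) * slot_ms
    return (ACTIVE_MW * awake_ms + SLEEP_MW * asleep_ms) / 1000.0

always_on = frame_energy_mj(num_slots=100, slot_ms=10, my_slots=100)
duty_cycled = frame_energy_mj(num_slots=100, slot_ms=10, my_slots=1)
print(f"always on: {always_on:.2f} mJ/frame, scheduled: {duty_cycled:.2f} mJ/frame")
```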

(2) Adaptive Component Operation. While nodes are awake, they can efficiently manage their activities to conserve energy. Dynamic power management (DPM) and dynamic voltage and frequency scaling (DVFS) are popular approaches to logic device optimization [64]. DPM puts the processor in sleep mode when there is no need for computation, while DVFS selects an optimal voltage and frequency from a set of discrete settings based on load requirements. Other techniques include race-to-dark (RTD), which, for logic with high leakage currents, executes tasks as fast as possible so that the processor can be put into sleep mode, minimizing leakage current [65]; adaptive voltage scaling (AVS), which adapts the supply voltage so that the processor operates at the minimum voltage possible for a given performance; and power gating. These techniques build on efficient techniques incorporated at the design stage.
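The sketch below illustrates a simple DVFS policy of the kind described above: the lowest discrete frequency-voltage pair that still meets the workload deadline is selected. The operating-point table is hypothetical.

```python
# Sketch of a DVFS policy: pick the lowest discrete (frequency, voltage)
# operating point that still completes the pending work before its deadline.
# The operating-point table is hypothetical.

OPERATING_POINTS = [  # (frequency in MHz, supply voltage in V)
    (100, 0.8), (200, 0.9), (400, 1.0), (800, 1.2),
]

def select_operating_point(cycles_pending: float, deadline_s: float):
    """Return the most frugal (MHz, V) pair that meets the deadline."""
    for mhz, volts in OPERATING_POINTS:          # sorted from slowest/lowest V
        if cycles_pending / (mhz * 1e6) <= deadline_s:
            return mhz, volts
    return OPERATING_POINTS[-1]                  # best effort if none suffices

# A light load runs at 100 MHz/0.8 V; a heavy load is pushed to 800 MHz/1.2 V.
print(select_operating_point(cycles_pending=5e5, deadline_s=0.01))   # (100, 0.8)
print(select_operating_point(cycles_pending=6e6, deadline_s=0.01))   # (800, 1.2)
```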

At the transceiver, traffic variations are exploited to control antenna power. A foremost scheme is transmit power control, which reduces the power of a radio transmitter to the lowest level required to maintain the link at a required QoS. Discontinuous transmission (DTX) and discontinuous reception (DRX) have been proposed to save energy in cellular networks [66]. Differences in traffic requirements are also used to exploit the different energy consumptions of different radio access technologies (RATs), reducing energy by efficiently balancing traffic among access technologies without compromising quality [67].
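A minimal sketch of transmit power control follows: the transmitter picks the lowest power that keeps the received signal above the receiver sensitivity plus a fade margin under a simple log-distance path-loss model. All parameters are illustrative and not tied to any particular standard.

```python
# Sketch of transmit power control: choose the lowest transmit power (dBm)
# that keeps the received power above the receiver sensitivity plus a fade
# margin, under a simple log-distance path-loss model. Parameters are
# illustrative only.
import math

def path_loss_db(d_m: float, pl0_db: float = 40.0, exponent: float = 3.0) -> float:
    """Log-distance path loss referenced to 1 m."""
    return pl0_db + 10 * exponent * math.log10(d_m)

def min_tx_power_dbm(d_m: float, sensitivity_dbm: float = -95.0,
                     margin_db: float = 10.0, p_max_dbm: float = 20.0) -> float:
    """Smallest transmit power that closes the link, capped at p_max."""
    needed = sensitivity_dbm + margin_db + path_loss_db(d_m)
    return min(needed, p_max_dbm)

for d in (5, 20, 80):
    print(f"{d:>3} m -> {min_tx_power_dbm(d):6.1f} dBm")
```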

For other components, traffic variation is monitored to switch a component between the on and off states and manage power efficiently: for example, adaptive sensing can limit high-power sensor operation in event-triggered applications to only when necessary, delegating continuous event-listening tasks to lower-power sensors. Intermittent on-off schemes might introduce latencies in operation, and the on-off cycle could be so recurrent as to impact network operation. In such cases, a prediction-based data analytic approach is favored.

2.2. Efficient Data Management

Communication networks are typically designed to convey data from one node to another. The volume of data is envisaged to keep rising, as is its variety. Generally, this rise causes a significant increase in the energy needed for preprocessing and transmission, and redundant data generation, processing, and transmission degrade network efficiency. Ingenious management of data is therefore necessary to contain the data-driven increase in energy consumption. Generally, data reduction techniques trade increased data processing for reduced data communication, limiting transmission to only when necessary. A typical classification of efficient data management techniques is presented in Figure 4.

2.2.1. Data Reduction

The goal of data reduction techniques is to shrink the amount of data without adversely affecting application goals. In-network processing techniques are conventionally employed to reduce the amount of data that needs to be transmitted. One way to reduce data volume is through aggregation. Under these schemes, gathered data that convey similar information are collectively represented by the information they imply. This is valuable in applications where data generated across the nodes are consistent, such as in event-driven and high-node-density applications [68]. An aggregator node intermediates between the source nodes and the sink node and aggregates similar data to avoid redundant transmission. Network coding can be viewed as one such aggregating scheme: an intermediate node generates new packets from several received packets, to be decoded at the receiver, allowing algebraic combinations of the data destined for a node to be accumulated into fewer transmissions [69].
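The sketch below illustrates the aggregation idea: an aggregator node summarizes similar readings from its cluster into a single value before forwarding, trading many uplink payloads for one. The payload size is a hypothetical placeholder used only to count bytes.

```python
# Sketch of in-network aggregation: an aggregator node summarizes similar
# readings from its children and forwards one packet instead of many.
# Payload sizes are hypothetical and used only to count transmitted bytes.

READING_BYTES = 8  # assumed size of one forwarded reading

def aggregate_mean(readings: list[float]) -> tuple[float, int, int]:
    """Return (summary, bytes without aggregation, bytes with aggregation)."""
    summary = sum(readings) / len(readings)
    return summary, READING_BYTES * len(readings), READING_BYTES

child_readings = [21.4, 21.6, 21.5, 21.7, 21.5]   # e.g., temperatures in a cluster
mean, raw_bytes, agg_bytes = aggregate_mean(child_readings)
print(f"forwarded value {mean:.2f}; {raw_bytes} B reduced to {agg_bytes} B uplink")
```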

The source coding technique, also called data compression by some authors, is likewise employed to encode information using fewer bits. By reducing data sizes, transmit energy can be reduced [70, 71]. Source coding can be lossless—eliminating only statistical redundancy—or lossy—further discarding less critical information. Lossy coding techniques present a trade-off between bit rate and reproduction fidelity and are typical in severely constrained applications. Given a maximum allowed delay and complexity, the goal is to achieve an optimal trade-off between bit rate and distortion [72].
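As an illustration of lossless source coding, the sketch below compresses a redundant sensor log with zlib and estimates the transmit-energy saving under an assumed, hypothetical energy-per-bit cost.

```python
# Sketch of lossless source coding: compress a redundant sensor log with zlib
# and estimate the transmit-energy saving under an assumed energy-per-bit cost.
import json
import zlib

ENERGY_PER_BIT_NJ = 50.0   # hypothetical radio cost in nanojoules per bit

samples = [{"t": i, "temp": 21.5 + (i % 3) * 0.1} for i in range(200)]
raw = json.dumps(samples).encode()
packed = zlib.compress(raw, level=9)

saved_bits = 8 * (len(raw) - len(packed))
print(f"{len(raw)} B -> {len(packed)} B, "
      f"~{saved_bits * ENERGY_PER_BIT_NJ / 1e6:.2f} mJ saved per report")
```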

2.2.2. Data Prediction

Predictive techniques have recently been proposed as data inference and recovery techniques to further limit data transfer [53]. Predictive techniques create a model at the source node, the sink node, or both to predict data streams based on previously observed values; provisions are made for the transmission of the difference between predicted and sensed values. Prediction can be applied either to infer a node's data from among a set of nodes (spatial) or to estimate future values from historical data (temporal) [73]. In [13], a systematic classification of predictive models for wireless sensor applications and a discussion of scheme selection based on WSNs' constraints and monitored data are presented.
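A minimal sketch of a temporal dual-prediction scheme follows: the source and sink run identical exponential-smoothing predictors, and the source transmits an actual reading only when the prediction error exceeds a tolerance (one common variant of the schemes surveyed in [13, 73]). The model, tolerance, and data are illustrative.

```python
# Minimal sketch of a temporal dual-prediction scheme: source and sink run the
# same exponential-smoothing predictor; the source transmits a reading only
# when the prediction error exceeds a tolerance. Parameters are illustrative.

class Predictor:
    def __init__(self, alpha: float = 0.5, initial: float = 0.0):
        self.alpha, self.estimate = alpha, initial

    def predict(self) -> float:
        return self.estimate

    def update(self, value: float) -> None:
        self.estimate = self.alpha * value + (1 - self.alpha) * self.estimate

def run(stream, tolerance=0.2):
    source, sink = Predictor(initial=stream[0]), Predictor(initial=stream[0])
    transmissions, reconstructed = 0, []
    for value in stream[1:]:
        if abs(value - source.predict()) > tolerance:
            transmissions += 1                    # send the real value
            source.update(value); sink.update(value)
            reconstructed.append(value)
        else:
            reconstructed.append(sink.predict())  # sink relies on its model
    return transmissions, reconstructed

readings = [20.0, 20.1, 20.1, 20.2, 22.0, 22.1, 22.0, 22.1, 22.2]
tx, rec = run(readings)
print(f"transmitted {tx} of {len(readings) - 1} samples")
```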

Data from the network can also be used to detect regularities in network operation, predict their occurrence, and efficiently adapt network operation to cater for these events. Many energy-efficient techniques now leverage network intelligence to achieve more efficient results. With many cognitive-network-based EE applications proposed in the literature, artificial intelligence is expected to play a crucial role in EE for future networks [74], including efficient adaptive resource allocation, discontinuous reception [75], channel learning for power management [76, 77], traffic offloading for energy efficiency in small cells [78], node device authentication for security [79], and intermittent energy management for energy-harvested applications [80, 81]. We present a list of prediction-based techniques for WCNs in Table 3. Predictive approaches are particularly heavy on processors, raising the question of how much processing a network can tolerate. In practice, the computational power introduced by these predictive approaches could be a limiting factor.

2.2.3. Efficient Storage

Local caching optimizes content delivery networks by storing frequently accessed data locally, avoiding the routing of such data every time they are requested. Storage points can be efficiently distributed across the network to manage high-demand content appropriately, and the energy saved can be immense for high-demand content of significant volume. A comprehensive survey of data management techniques, including local caching, is given in [7]. This technique is only applicable to networks with uniquely identifiable content in high demand.
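The sketch below illustrates the caching idea with a small least-recently-used (LRU) cache at an edge node: repeated requests for popular content are served locally, avoiding repeated upstream transfers. The capacity and request pattern are hypothetical.

```python
# Sketch of local caching for high-demand content: an LRU cache at an edge
# node serves repeated requests locally instead of fetching them upstream.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity, self.store = capacity, OrderedDict()

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)        # mark as recently used
            return self.store[key]
        return None

    def put(self, key, value) -> None:
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least recently used

cache, hits, misses = LRUCache(capacity=2), 0, 0
for content_id in ["a", "b", "a", "a", "c", "b", "a"]:
    if cache.get(content_id) is None:
        misses += 1
        cache.put(content_id, f"payload-{content_id}")  # fetched upstream
    else:
        hits += 1
print(f"hits={hits}, misses={misses}")   # each hit avoids an upstream transfer
```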

2.3. Efficient Node-to-Node Interactions

Internodal interaction denotes how nodes are arranged to communicate and the methods through which communication occurs. The mode of interaction in a network mirrors its energy usage pattern at participant nodes; optimizing internodal communication is therefore vital to reducing node energy consumption. Efficient internodal interaction methods range from architecture design and dynamic architecture control to energy-efficient routing and the use of an adaptable proximate medium. These techniques are shown in Figure 5 and are briefly discussed in the following subsections.

2.3.1. Efficient and Adaptive Architecture

The need to structure the network in a way that optimizes energy has focused contemporary research on hierarchical structures, which have the potential to save energy by bringing nodes closer to a gateway, reducing transmission energy [112]. The structure establishes cluster heads, whose coordination and data-forwarding responsibilities mean that they consume more energy. In HENs (e.g., cellular networks), the individual nodes have looser energy constraints, so a hierarchical approach can be liberally exploited; an increase in the number of lightly loaded small cells, however, degrades EE [112]. The decisive deployment of relay nodes can further optimize network energy in LENs (e.g., sensor networks), where network connectivity depends largely on the proximate distribution of nodes.

A hierarchical architecture implements several levels of network coverage that can be separately and judiciously controlled for efficiency: with an overarching macrocell providing coverage, smaller cells can be efficiently zoomed out or in [113], putting neighboring cells to sleep or limiting radiation energy to a small service area, respectively. Device-to-device communication can be implemented to offload traffic from the BS [114, 115]. In applications where both coverage and connectivity depend on nodes in proximity, as in LENs, nodes can be selectively put to sleep without significantly impacting connectivity and coverage, a scheme known as topology control [116–118]. Other offloading techniques in LENs include cluster head selection [119, 120], which balances energy by rotating the cluster head responsibility among participant nodes, and mobile gateways and relay nodes, which balance energy by reducing transmission energy through their mobility. This traffic offloading to neighboring cells is done in a fashion that conforms to an optimal energy policy.
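A minimal sketch of energy balancing by cluster-head rotation is given below: each round, the node with the highest residual energy takes on the more expensive cluster-head role. The per-round energy costs are hypothetical, and real schemes such as those in [119, 120] use more elaborate selection criteria.

```python
# Sketch of cluster-head rotation for energy balancing: each round, the node
# with the highest residual energy takes the (more expensive) cluster-head
# role. Energy costs per round are hypothetical.

HEAD_COST, MEMBER_COST = 5.0, 1.0   # per-round energy units

def run_rounds(energies: dict[str, float], rounds: int) -> dict[str, float]:
    for _ in range(rounds):
        head = max(energies, key=energies.get)          # rotate by residual energy
        for node in energies:
            energies[node] -= HEAD_COST if node == head else MEMBER_COST
    return energies

nodes = {"n1": 100.0, "n2": 100.0, "n3": 100.0, "n4": 100.0}
print(run_rounds(nodes, rounds=20))
# Rotation keeps residual energies close together instead of exhausting one node.
```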

The sleep techniques discussed in this section differ from those in Node Sleep under Section 2.1.2. There, we discussed efficient node operation based on traffic information available to each node or on a scheduled process; here, sleep is directed by the network, based on network-wide traffic knowledge. This informs most EE routing protocol designs.

Routing energy is of particular concern in LENs, since the network depends on individual nodes for coverage and connectivity. EE node interactions, in addition to conserving node power, require energy balancing techniques among participant nodes [121, 122].

2.3.2. Transmission Routing Optimization and an Alternative Medium

Energy can further be optimized by specifying efficient routing techniques for node-to-node interaction. Multihop techniques place and use neighboring nodes as a medium to reach farther nodes, improving EE, and have been widely proposed for LENs. Multipath routing allows multiple routes across available nodes, reducing energy drain along any single route [123, 124], while energy-efficient routing, based on the network topology and performance constraints, ensures that routing decisions are sensitive to the energy consumption they compel [125]. Further reports on efficient routing are given in [12, 126, 127].
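As a sketch of the idea behind energy-aware routing, the snippet below runs Dijkstra over a hypothetical topology using a link cost that adds a penalty for relaying through nodes with low residual energy; the topology, costs, and weighting are illustrative and not drawn from [125].

```python
# Sketch of energy-aware routing: shortest path under a link cost that adds a
# penalty for relaying through nodes with low residual energy. The topology,
# costs, and weighting are hypothetical.
import heapq

def route(graph, residual, src, dst, penalty_weight=2.0):
    """Dijkstra where cost(u, v) = tx_energy(u, v) + w / residual_energy(v)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, tx_energy in graph[u].items():
            cost = tx_energy + penalty_weight / residual[v]
            if d + cost < dist.get(v, float("inf")):
                dist[v], prev[v] = d + cost, u
                heapq.heappush(heap, (d + cost, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

graph = {  # tx_energy per link (arbitrary units)
    "S": {"A": 1.0, "B": 1.2}, "A": {"D": 1.0}, "B": {"D": 1.0}, "D": {},
}
residual = {"S": 10.0, "A": 0.5, "B": 8.0, "D": 10.0}  # A is nearly depleted
print(route(graph, residual, "S", "D"))   # prefers S -> B -> D despite the longer link
```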

The use of a proximate alternative medium for data transmission could prove to be a ground-breaking technique to improve EE in future networks. Some examples include transmission over powerline in smart grid networks, instead of through traditional wireless links [128], and electro-quasistatic human body communication [129], which uses the human body as a medium for body sensor networks, significantly limiting cybersecurity concerns, as well as energy requirements.

Cybersecurity is also crucial to internodal EE in LENs. Energy is as precious in these applications as the data they convey. Using compromised nodes, malicious network users may attack wireless systems primarily to drain system energy [13, 130], or with some other motive, leaving an energy-sapped network in the aftermath. Hence, secure network techniques such as node identification and data security and privacy are essential energy-efficient techniques from this standpoint.

In summary, energy-efficient networks of the future are expected to harness a combination of all these techniques across the three levels—design and operation, data management, and architecture and routing—in a fashion that fulfills their design objective. Networks deployed in high-accuracy applications, for example, in [131], would employ a different mix of energy-efficient techniques from those deployed in noncritical applications, and the same goes for other applications.

3. Challenges and Opportunities

3.1. Prediction-Based Energy Efficiency Approach: Computation vs. Radiation Power Trade-Off

There is a preponderance of AI techniques in the literature, which indicates a move towards cognitive communication. Processing will increase as networks leverage big data for smarter and, more importantly, more efficient operation. With predictive techniques especially heavy on processors, how much processing can a learning-oriented network allow?

For a network with $N$ nodes, we denote the power due to transmission as $P_T$ and the power used up in computation as $P_C$. In evaluating the power pattern mirrored at each node due to a specific technique, we assign the circuit and transmit power consumption at the $i$th node before the technique is applied as $p_{c,i}$ and $p_{t,i}$, respectively. Prediction-based techniques tend to cause an increased $p_{c,i}$ with the objective of compelling a decrease in $p_{t,i}$. We thus further define $\beta_i$ to be the transmission power coefficient and $\gamma_i$ the computation power coefficient at each node after optimization, where a coefficient greater than 1 denotes a power increase and less than 1 a power decrease. Before optimization, neglecting power consumption due to other components, the power at node $i$ is $p_i = p_{c,i} + p_{t,i}$, and the network power before optimization is $P = \sum_{i=1}^{N} (p_{c,i} + p_{t,i})$, where $P = P_C + P_T$.

After optimization, the power at each node changes to $\gamma_i p_{c,i} + \beta_i p_{t,i}$, and the network power is the sum across all the nodes, given as $P' = \sum_{i=1}^{N} (\gamma_i p_{c,i} + \beta_i p_{t,i}) = \gamma P_C + \beta P_T$. We introduce $\beta$ and $\gamma$ as the transmission power coefficient and computation power coefficient for the entire network, respectively.

Energy efficiency is a measure of both the energy consumed and some network performance index of interest. Denoting the network performance as $R$, which could be the achievable capacity, throughput, or outage capacity [10], and assuming $R$ changes by a factor $\varepsilon$ after optimization, we can write the network efficiency before optimization, $\eta$, and after optimization, $\eta'$, as given in Equations (2) and (3), respectively:

$$\eta = \frac{R}{P_C + P_T}, \qquad (2)$$

$$\eta' = \frac{\varepsilon R}{\gamma P_C + \beta P_T}. \qquad (3)$$

In general, the efficiency before optimization must be less than that after optimization, as shown in Equation (4):

$$\frac{R}{P_C + P_T} < \frac{\varepsilon R}{\gamma P_C + \beta P_T}. \qquad (4)$$

From Equation (4), it can be shown that for network optimization, where $\gamma$ increases ($\gamma > 1$), $\beta$ should decrease ($\beta < 1$) to remain within efficient limits for a given achieved gain ($\varepsilon$). Rearranging Equation (4) gives Equation (5):

$$\gamma P_C + \beta P_T < \varepsilon \left( P_C + P_T \right). \qquad (5)$$

The nearer the LHS of Equation (5) is to its RHS, the weaker the efficiency improvement. To keep the LHS well below the RHS, $\varepsilon$ should be kept at least slightly above 1 and $\beta$ should be minimized ($\beta \to 0$). In summary, the ratio of circuit to transmission power, together with the energy expected to be saved in transmission by applying a technique and the possible change in network performance, limits how much processing power can be introduced.

Figures 6–8 show the limits of the computation power coefficient ($\gamma$) vis-à-vis the changes in network performance ($\varepsilon$) and transmission power ($\beta$) if there is to be an improvement in network efficiency. The ideal case for maximum efficiency is at minimum $\gamma$ and $\beta$ (near zero) and maximum network performance. A rearrangement of Equation (5) gives the limit of $\gamma$ in Equation (6):

$$\gamma < \varepsilon + \left( \varepsilon - \beta \right) \frac{P_T}{P_C}. \qquad (6)$$

If $P_T$ is much higher than $P_C$, as Figure 6 illustrates, increasing the computation power coefficient ($\gamma$) must be accompanied by an increased network performance coefficient ($\varepsilon$) to improve system efficiency. Figures 7 and 8 illustrate what happens when $P_C$ is equal to, or much higher than, $P_T$, respectively. An increase in the transmission power coefficient ($\beta$) due to optimization must be compensated by some rise in the network performance coefficient ($\varepsilon$) to accommodate any rise in computation power ($\gamma$). Increasing $\beta$ also throttles the allowable increase in computation ($\gamma$) that can be accommodated—$\gamma$ decreases considerably as $\beta$ increases. This trend, though present where $P_C \gg P_T$, as shown in Figure 8, is negligible there. Furthermore, for the same change in transmission power, $\beta$, and network performance, $\varepsilon$, the allowance for the computation power increase, $\gamma$, grows with $P_T$—about four times for $P_C = P_T$ (Figure 7) and over twenty times for $P_T \gg P_C$ (Figure 6). Hence, while networks with $P_C \gg P_T$ can accommodate additional computation, and hence circuit power, over a broader range of changes in $\beta$ and $\varepsilon$, those with $P_T \gg P_C$ can allow higher degrees of computation power increase over a narrower range. It is further necessary to note that other network trade-offs abound in employing energy-efficient techniques [10], and the authors in [11, 132] have provided useful reports on these trade-offs.
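As a numeric illustration of the bound in Equation (6), the sketch below evaluates the allowable computation power coefficient for a few assumed values of the ratio $P_T/P_C$; the coefficients and ratios are illustrative and do not correspond to the exact settings used for Figures 6–8.

```python
# Numeric check of the computation-power limit in Equation (6),
# gamma < epsilon + (epsilon - beta) * P_T / P_C, for assumed power ratios.
# The ratios and coefficients are illustrative, not measured values.

def gamma_limit(epsilon: float, beta: float, pt_over_pc: float) -> float:
    """Largest computation power coefficient that still improves efficiency."""
    return epsilon + (epsilon - beta) * pt_over_pc

for pt_over_pc in (0.1, 1.0, 10.0):      # P_C >> P_T, P_C = P_T, P_T >> P_C
    limit = gamma_limit(epsilon=1.0, beta=0.5, pt_over_pc=pt_over_pc)
    print(f"P_T/P_C = {pt_over_pc:>4}: gamma must stay below {limit:.2f}")
# Transmission-heavy networks (large P_T/P_C) tolerate far more added
# computation for the same transmit-power saving and unchanged performance.
```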

3.2. Energy Supply
3.2.1. Limited-Energy Nodes (LENs)

EH techniques are typically intermittent and uncontrollable, limiting their application to noncritical cases. Radio signals used to carry energy over the air present a controllable energy-replenishing opportunity, making the concept of simultaneous wireless information and power transfer (SWIPT) a trend [133–135]. The ratio of the device output power to the input power gives the harvesting efficiency; hence, operation is more efficient when a higher percentage of ambient energy is converted for node use. Due to limitations in ambient EH, energy balancing techniques have been developed to spread energy consumption evenly among all nodes so that all nodes are either equally alive or not [121]. We note, however, that with recent advancements in EH techniques, energy balancing might no longer be required in applications where the ambient energy conversion rate exceeds or matches the energy depletion rate. Shaikh and Zeadally [15] discuss a comprehensive catalog of EH techniques for LENs.

3.2.2. Base Stations

Base stations are generally expected to remain an essential part of ubiquitous wireless systems in the future. With traditional BSs accounting for almost 60% of the energy consumed in cellular networks [16], the energy supply for future base stations is of particular interest. A multiple-tier network structure is anticipated in the imminent 5G: coverage tiers, which operate at frequencies below 6 GHz, mainly to penetrate barriers and provide wide coverage (macrocells), and hotspot tiers, which are primarily deployed in places with increased user density and could avoid wall-penetration power losses [136] when deployed with an outdoor relay.

A power model has been developed for these future base stations, and their energy usage estimated [17, 137]. The network structure in 5G can be expected to carry over to subsequent cellular technologies, which are already in development [4, 5], changing only slightly, if at all. The power industry is also undergoing a gradual change from carbon-intensive to renewable energy sources, at a faster rate than historical transitions might suggest [138]. It is anticipated that, with this impending change, more renewable energy sources will be integrated into the distribution network, with solar panels envisaged to be the most relevant option for energy supply globally due to their falling costs [139]. Hence, colocated energy harvesting, especially for macrocells, can present new opportunities for cellular network operators.
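For concreteness, the sketch below implements a generic linear, load-dependent base-station power model of the form commonly used in such estimates; the structure follows the widely cited P_in = N_TRX (P_0 + Δ_p P_out) form, but the parameter values here are placeholders rather than the figures reported in [17, 137].

```python
# Sketch of a linear load-dependent base-station power model of the general
# form P_in = N_trx * (P_0 + delta_p * P_out), with sleep mode when idle.
# Parameter values are placeholders, not the figures published in [17, 137].

def bs_input_power_w(load: float, n_trx: int = 6, p0_w: float = 130.0,
                     delta_p: float = 4.7, p_max_w: float = 20.0,
                     p_sleep_w: float = 75.0) -> float:
    """Total input power (W) at a given traffic load in [0, 1]."""
    if load <= 0.0:
        return n_trx * p_sleep_w            # transceivers in sleep mode
    return n_trx * (p0_w + delta_p * load * p_max_w)

for load in (0.0, 0.1, 0.5, 1.0):
    print(f"load {load:.1f} -> {bs_input_power_w(load):7.1f} W")
```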

4. Future Directions

Two broad areas of future research are suggested based on this survey. These directions—energy-efficient machine learning methods and context-specific designs—are outlined in Table 4. A major drawback of adopting a supervised learning approach, such as those presented in [84, 85, 93, 94, 97], is the requirement for training data, which, for some applications, might be cumbersome to obtain. Furthermore, the performance of machine learning methods, though excellent, is not error free. This casts doubt on their application to error-sensitive tasks, such as the device authentication presented in [79, 84, 105]. The apparent alternative would be to augment these techniques to improve their performance, but the aggregated increase in energy usage might be significantly above efficient limits. Prediction-based techniques, when applied to error-sensitive contexts in WCNs, must be evaluated for guaranteed performance and energy efficiency, as with other techniques, such as in [140].

Multiple mobile vendor collaboration could play a crucial role in realizing strategies for cooperative EE schemes, such as those presented in [141]. Practically, barriers to mobile operator cooperation could limit the potential of these schemes. Cross-vendor interaction in traffic offloading, as well as in energy distribution, would significantly improve network EE. A shared network infrastructure, for example, where the energy contributions of each operator are quantified for appropriate billing, presents an ideal case for vendor cooperation: competition among vendors is detached from the infrastructure management level, making it readily amenable to cooperative energy-efficient techniques.

Overall, the trends in energy-efficient techniques suggest that application-specific standards hold much promise for EE improvement. In [128, 129], for example, the energy-efficient technique presented in each case is tied to a specific context. Each setting presents unique constraints and opportunities for which general-purpose approaches may not be thoroughly efficient. For instance, Min and Chandrakasan [142], noting the uniqueness of specific applications, propose context-specific protocols to extend the frontiers of EE. Extending this pattern to all aspects of network design is crucial to EE, and we therefore recommend a top-down design approach for specific applications, spanning node design, node operation mode, data management techniques, network architecture, and protocol design, to achieve greater efficiency.

5. Conclusions

Energy inefficiency in conventional networks may be cost prohibitive in high-energy nodes and network-life-threatening in limited-energy nodes. The drive towards energy efficiency is therefore a combined multidisciplinary effort, spanning efficient hardware design and smart operation, efficient data communication, and low-energy processing techniques. A system built with EE as a performance metric must embody optimization at all levels and should harness the profits of cooperation as well as the unique opportunities in its application scenario.

Nomenclature

LENs: Limited-energy nodes
HENs: High-energy nodes
WCNs: Wireless communication networks
WSNs: Wireless sensor networks
EE: Energy efficiency
EH: Energy harvesting
$p_{t,i}$: Transmission power at each node in the network
$\beta_i$: Transmission power coefficient at each node after optimization
$P_C$: Computation power in the overall network
$\gamma_i$: Computation power coefficient at each node after optimization
$P_T$: Transmission power in the overall network
$\beta$: Transmission power coefficient in the overall network after optimization
$p_{c,i}$: Computation power at each node in the network
$\gamma$: Computation power coefficient in the overall network after optimization
$R$: Some measure of network performance
$\varepsilon$: Change in network performance after optimization.

Additional Points

Statement of Public Interest. Wireless communication networks are at the heart of smartly controlled systems and the idea of a globally connected world. Generally, wireless communication systems are designed to convey data from a source to a destination. As wireless systems continue to grow and evolve to accommodate upward-scaling traffic requirements, energy efficiency increasingly becomes a concern. In light of prevailing environmental concerns and increasing energy demand, energy efficiency is becoming a trend in the design of myriad energy-consuming systems across several domains, such as consumer household appliances, electric grid systems, motor vehicles, industrial processes, and machines, among others. Nearly all these systems also employ wireless communication networks to either increase their performance or save energy. Therefore, the efficiency in the operation and performance of wireless communication networks is of particular interest. This paper provides a background on methods for the efficient design and operation of wireless communication networks and further examines some recent energy-efficient design trends. Notably, in the case of data-intensive techniques such as data mining and artificial intelligence, which are candidate approaches for smart wireless networks of the future, we evaluate the limits of adopting such approaches in extant systems. We also discuss energy supply options given the impending disruptive changes in the energy sector, which again are partly driven by environmental concerns and communication-network-based smart control. We conclude by highlighting the potential of cross-vendor collaboration and a context-specific approach to wireless system design for the energy efficiency of WCNs.

Conflicts of Interest

The authors declare that they have no conflicts of interest.