Rechargeable sensor activation under temporally correlated events


Abstract

Wireless sensor networks are often deployed to detect “interesting events” that exhibit some degree of temporal correlation across their occurrences. Sensors are typically heavily constrained in energy, so energy usage at the sensors must be optimized for efficient operation of the sensor system. A key optimization question in such systems is how a rechargeable sensor should be activated over time so that the number of interesting events detected is maximized, given the typically slow recharge rate of the sensor. In this article, we consider the activation question for a single sensor and pose it in a stochastic decision framework. The recharge-discharge dynamics of a rechargeable sensor node, together with the temporal correlations in event occurrences, make the optimal sensor activation question very challenging. Under complete state observability, we outline a deterministic, memoryless policy that is provably optimal. For the more practical scenario, in which an inactive sensor may not have complete information about the state of event occurrences in the system, we comment on the structure of the deterministic, history-dependent optimal policy. We then develop a simple, deterministic, memoryless activation policy based upon energy balance and show that it achieves near-optimal performance under certain realistic assumptions. Finally, we show that an aggressive activation policy, in which the sensor activates itself at every possible opportunity, performs optimally only if events are uncorrelated.


References

  1. Jaggi, N., Kar, K., & Krishnamurthy, A. (2007). Rechargeable sensor activation under temporally correlated events. In Proceedings of the Fifth Intl. Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt’07), Cyprus, Apr.

  2. Kansal, A., & Srivastava, M. B. (2005). Energy harvesting aware power management (book chapter). In Wireless sensor networks: A systems perspective. Norwood, MA: Artech House.

  3. Raghunathan, V., Kansal, A., Hsu, J., Friedman, J., & Srivastava, M. (2005). Design considerations for solar energy harvesting wireless embedded systems. In Proceedings of the 4th IEEE/ACM Intl. Conference on Information Processing in Sensor Networks (IPSN) – Special Track on Platform Tools and Design Methods for Network Embedded Sensors (SPOTS) (pp. 457–462). Los Angeles, CA, Apr.

  4. Kar, K., Krishnamurthy, A., & Jaggi, N. (2006). Dynamic node activation in networks of rechargeable sensors. IEEE/ACM Transactions on Networking, 14(1), 15–26.


  5. Jaggi, N., Krishnamurthy, A., & Kar, K. (2005). Utility maximizing node activation policies in networks of partially rechargeable sensors. In Proceedings of the 39th Annual Conference on Information Sciences and Systems (CISS), Baltimore, March.

  6. Jaggi, N. (2006). Robust threshold based sensor activation policies under spatial correlation. In Proceedings of the Fourth Intl. Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt’06) (pp. 1–8). Boston, Apr.

  7. Akyildiz, I. F., Vuran, M. C., & Akan, O. B. (2004). On exploiting spatial and temporal correlation in wireless sensor networks. In Proceedings of the Second Intl. Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt’04) (pp. 71–80). Cambridge, UK, Mar.

  8. Vuran, M. C., Akan, O. B., & Akyildiz, I. F. (2004). Spatio-temporal correlation: Theory and applications for wireless sensor networks. Elsevier Computer Networks, 45(3), 245–261.


  9. Gastpar, M., & Vetterli, M. (2003). Source-channel communication in sensor networks. In Proceedings of the Second Intl. Workshop on Information Processing in Sensor Networks (IPSN) (pp. 162–177). New York: Springer, Apr.

  10. Pattem, S., Krishnamachari, B., & Govindan, R. (2004). The impact of spatial correlation on routing with compression in wireless sensor networks. In Proceedings of ACM/IEEE International Symposium on Information Processing in Sensor Networks (IPSN) (pp. 28–35). Berkeley, CA, Apr.

  11. Jaggi, N. (2007). Node activation policies for energy-efficient coverage in rechargeable sensor systems. PhD thesis, Rensselaer Polytechnic Institute, May. http://www.ecse.rpi.edu/homepages/koushik/Thesis_Jaggi.pdf

  12. Puterman, M. L. (1994). Markov decision processes – discrete stochastic dynamic programming. NJ: John Wiley and Sons.


  13. Cassandra, A. R., Kaelbling, L. P., & Littman, M. L. (1994). Acting optimally in partially observable stochastic domains. In Proceedings of the 12th National Conference on Artificial Intelligence (AAAI-94), vol. 2 (pp. 1023–1028). Seattle, Washington: AAAI Press/MIT Press.

  14. Fernández-Gaucherand, E., Arapostathis, A., & Marcus, S. I. (1991). On the average cost optimality equation and the structure of optimal policies for partially observable Markov decision processes. Annals of Operations Research, 29(1–4), 439–470.


  15. Wolff, R. (1989). Stochastic modeling and the theory of queues. NJ: Prentice Hall.


  16. Bertsekas, D. P. (2000). Dynamic programming and optimal control, volume I. Belmont, MA: Athena Scientific.


  17. Bhat, U. N. (1984). Elements of applied stochastic processes, 2nd edn. New York: John Wiley.


  18. Puterman, M. (2005). Markov decision processes: Discrete stochastic dynamic programming. NY: Wiley.


  19. Littman, M. L. (1994). Memoryless policies: Theoretical limitations and practical results. In From Animals to Animats 3: Proceedings of the Third International Conference on Simulation of Adaptive Behavior (pp. 238–245). Brighton, UK: MIT Press.

  20. Shaked, M., & Shanthikumar, J. (1994). Stochastic orders and their applications. NY: Academic Press.



Author information

Correspondence to Neeraj Jaggi.

Additional information

This article extends the results that appeared in WiOpt 2007, Cyprus, April 2007 [1].

Appendices

Appendix I: transformation of POMDP to MDP

The state of the equivalent MDP at time \(t\) is the information vector \(Z_t \in \Delta\) (of length \(|{\mathcal{X}}|\)), whose \(i\)th component is given by \(Z_t^{(i)} = \hbox{Pr}[X_t = i \,|\, y_t, \ldots, y_1; u_{t-1}, \ldots, u_0], \; i \in {\mathcal{X}}\). We have \(1' Z_t = 1\), since the elements of \(Z_t\) correspond to mutually exclusive events whose union is the universal set. The state \(Z_{t+1}\) is recursively computable given the transition probability matrices \(P(u)\), the action taken \(u_t\) and the observation \(y_{t+1}\) [14], as

$$ Z_{t+1} = \sum_{y \in {\mathcal{Y}}} \frac{\bar{Q}_y(u_t) P'(u_t)Z_t}{1'\bar{Q}_y(u_t) P'(u_t)Z_t} I[Y_{t+1} = y], $$
(29)

where \(I[A]\) denotes the indicator function of the event \(A\), \(\bar{Q}_y(u) = \hbox{diag}\{ q_{x,y}(u) \}\), and \(1'\) denotes a row vector with all elements equal to one. The numerator in the recursive relation is the probability of the event \(X_{t+1} = i, Y_{t+1} = y\) given past actions and observations, and is denoted by \(\bar{T}(y, Z_t, u_t)\); the denominator is the probability of the event \(Y_{t+1} = y\) given past actions and observations, and is denoted by \(V(y, Z_t, u_t)\). The fraction \(\frac{\bar{T}}{V}\) is denoted \(W(y, Z_t, u_t)\). \(\{Z_t\}\) forms a completely observable controlled Markov process with state space \(\Delta\). The reward associated with state \(Z \in \Delta\) and action \(u \in {\mathcal{U}}\) is defined as \(\bar{r}(Z,u) = Z' [r(i, u)]_{i \in {\mathcal{X}}}\). The optimal reward for the original POMDP is the same as that of the equivalent formulated MDP [14].
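To make the recursion (29) concrete, here is a minimal sketch of the information-vector update in Python with NumPy. The function name, the argument layout, and the dictionary encoding of \(P(u)\) and \(q_{x,y}(u)\) are our own illustration, not from the paper.

```python
import numpy as np

def belief_update(Z, u, y, P, Q):
    """One step of the information-vector recursion of Eq. (29).

    Z : current information vector Z_t, shape (n,), entries sum to 1
    u : action u_t taken at time t
    y : index of the observation Y_{t+1} = y received at time t+1
    P : dict action -> transition matrix, P[u][i, j] = Pr[X_{t+1}=j | X_t=i, u]
    Q : dict action -> observation matrix, Q[u][x, y] = q_{x,y}(u)
    """
    Qbar_y = np.diag(Q[u][:, y])   # \bar{Q}_y(u) = diag{ q_{x,y}(u) }
    T = Qbar_y @ P[u].T @ Z        # numerator \bar{T}(y, Z_t, u_t)
    V = T.sum()                    # denominator V(y, Z_t, u_t) = 1' T
    return T / V                   # W(y, Z_t, u_t) = Z_{t+1}
```

Iterating this update from \(Z_0\), one observation at a time, reproduces the controlled Markov process \(\{Z_t\}\) described above.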

Let \(e^j\) denote the unit column vector with all elements equal to zero except the \(j\)th element, which equals one. Thus, if \(u_{t-1} = 1\), then \(Z_t = e^{y_t} = e^{x_t}\). On the other hand, if \(u_{t-1} = 0\), then the observation \(y_t = (L, \phi)\), for some \(L\) such that \(0 \le L \le K\). Given the observation \(y_t = (L, \phi)\), the state of the system is either \((L, 0)\) or \((L, 1)\). Thus the state \(Z_t\) of the equivalent MDP has a maximum of two non-zero components, and is of the form \(Z_t = \alpha_1 e^j + \alpha_2 e^{j'}\), where \(\alpha_1 + \alpha_2 = 1\), \(0 \le \alpha_1, \alpha_2 \le 1\), \(j = (L, 0)\), and \(j' = (L, 1)\). The values of \(\alpha_1, \alpha_2\) exhibit an elegant structure, as discussed below.

Recall the \(i\)-step transition probability functions \(\hbox{F}\) defined in Sect. 5.1.1. Let us represent \(Z_t = (1 - \hbox{F}_{E, 1 - E}^{(i)}) e^{(L,E)} + \hbox{F}_{E, 1 - E}^{(i)} e^{(L,1-E)}\) as \(Z_t = (L, E, i)\).

Lemma 8

The state-space Δ is countable.

Proof

Let \(Z_t = (L', E', i)\) for some \(0 \le L' \le K\), \(E' \in \{0, 1\}\) and integer \(i \ge 0\).

  • Case \(u_t = 1\): Let \(X_{t+1} = (L, E)\). Then \(y_{t+1} = X_{t+1} = (L, E)\). We have \(Z_{t+1} = (L, E, 0)\).

  • Case \(u_t = 0\): Let \(X_{t+1} = (L, E)\). Then \(y_{t+1} = (L, \phi)\). Let us consider the case where \(E' = 0\). Expanding, we have \(Z_t = (1 - \hbox{F}_{0, 1}^{(i)}) e^{(L',0)} + \hbox{F}_{0, 1}^{(i)} e^{(L',1)}\). Using (16), we have,

    $$ \begin{aligned} Z_{t+1} &= \left[p_c^{\rm off} \left(1 - \hbox{F}_{0, 1}^{(i)}\right) + \left(1 - p_c^{\rm on}\right) \hbox{F}_{0, 1}^{(i)}\right] e^{(L, 0)} + \left[ p_c^{\rm on} \hbox{F}_{0, 1}^{(i)} + \left(1 - p_c^{\rm off}\right) \left(1 - \hbox{F}_{0, 1}^{(i)}\right) \right] e^{(L, 1)}\\ &= \left[1 - \hbox{F}_{0, 1}^{(i+1)}\right] e^{(L, 0)} + \hbox{F}_{0, 1}^{(i+1)} e^{(L, 1)} = (L, 0, i+1) = (L, E', i+1). \end{aligned} $$

Similarly, for the case \(E' = 1\), \(Z_{t+1} = (L, E', i + 1)\).

Thus, \(Z_{t+1}\) is completely described using \(Z_t\), \(u_t\) and \(y_{t+1}\). Assuming an initial state (and observation) of \((K, 1)\), we have \(Z_0 = (K, 1, 0)\), and \(Z_t\) is of the form \((L, E, i)\), \(\forall t > 0\). Since \(L\), \(E\), and \(i\) are individually countable, and since all the vectors \(Z \in \Delta\) are of the form \((L, E, i)\), we have the result. \(\square\)

Thus \(\alpha_1, \alpha_2\) take on values only from the set \({\mathcal{S}}\), where

$$ {\mathcal{S}} = \{ \hbox{F}_{E, 1 - E}^{(i)}, 1 - \hbox{F}_{E, 1 - E}^{(i)} \}, \,i \ge 0, \,i \,\hbox{ integer}, E \in \{ 0, 1 \}. $$
(30)
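Every element of \({\mathcal{S}}\) can be generated by iterating the one-step recursion behind (16), exactly as in the proof of Lemma 8 above. Below is a minimal sketch in Python; the function and argument names are ours, with p_on and p_off standing for \(p_c^{\rm on}\) and \(p_c^{\rm off}\).

```python
def transition_prob(E, i, p_on, p_off):
    """F_{E,1-E}^{(i)}: probability that the event process, started in
    state E, is in state 1-E after i steps of the two-state Markov chain
    with Pr[on -> on] = p_on and Pr[off -> off] = p_off.
    """
    stay = p_on if E == 1 else p_off    # Pr[remain in starting state E]
    back = p_off if E == 1 else p_on    # Pr[remain in the other state 1-E]
    F = 0.0                             # F_{E,1-E}^{(0)} = 0
    for _ in range(i):
        F = (1.0 - F) * (1.0 - stay) + F * back
    return F
```

For instance, transition_prob(0, 1, p_on, p_off) returns \(1 - p_c^{\rm off} = \hbox{F}_{0,1}^{(1)}\), matching the expansion used in the proof of Lemma 9 below.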

Lemma 9

The reward function of the equivalent MDP, \(\bar{r}(Z,u)\), \(\forall Z \in \Delta\), \(u \in {\mathcal{U}}\), belongs to the set \({\mathcal{S}}\) given by (30).

Proof

Let \(Z = (L, E, i)\). For \(u = 0\), \(\bar{r}(Z, 0) = 0 = \hbox{F}_{1,0}^{(0)}\). For \(u = 1\) and \(L < \delta_1 + \delta_2\), \(\bar{r}(Z, 1) = 0 = \hbox{F}_{1,0}^{(0)}\). For \(u = 1\) and \(L \ge \delta_1 + \delta_2\), we have the following cases:

  • \(E = 0\): \(\bar{r}(Z,1) = [1 - \hbox{F}_{0, 1}^{(i)}] \left(1 - p_c^{\rm off}\right) + \hbox{F}_{0, 1}^{(i)} p_c^{\rm on} = \hbox{F}_{0, 1}^{(i+1)}\).

  • \(E = 1\): \(\bar{r}(Z,1) = [1 - \hbox{F}_{1, 0}^{(i)}] p_c^{\rm on} + \hbox{F}_{1, 0}^{(i)}\left(1 - p_c^{\rm off}\right) = 1 - \hbox{F}_{1, 0}^{(i+1)}\).

The above equalities follow from the definition of \(\bar{r}\) and (16). \(\square\)
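The case analysis above translates directly into a short routine for the expected one-step reward. A sketch building on transition_prob() from the previous snippet (the function name and signature are ours):

```python
def reward_bar(L, E, i, u, delta1, delta2, p_on, p_off):
    r"""\bar{r}((L, E, i), u): expected one-step reward of the equivalent
    MDP, per the cases in the proof of Lemma 9.
    """
    if u == 0 or L < delta1 + delta2:
        return 0.0                                 # F_{1,0}^{(0)} = 0
    F = transition_prob(E, i + 1, p_on, p_off)     # F_{E,1-E}^{(i+1)}
    return F if E == 0 else 1.0 - F                # Pr[event on at t+1]
```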

Appendix II: proofs of Lemma 2 and Lemma 3

Proof of Lemma 2

We divide the state-space into six categories and show that the optimality equations hold for the above values of \(h^{\ast}\), \(\Upgamma^{\ast}\) in all the scenarios, within an error of \(O\left(\frac{1}{\rho}\right)\).

Case I \((L, 0, i)\), \(0 \le L < \delta_1 + \delta_2\), \(i \ge 0\): The l.h.s. of the optimality equation equals \(h^{\ast}((L, 0, i)) + \Upgamma^{\ast}\). Only the deactivate action is feasible in this state.

$$ \begin{aligned} \hbox{r.h.s.}(0) &= \bar{r}((L, 0, i), 0) + (1 - q) h^{\ast}((L, 0, i+1)) + q h^{\ast}((L + c, 0, i+1)) \\ &= 0 + (1 - q) \alpha L + q \alpha (L + c) = \alpha L + \alpha qc \\ &= h^{\ast}((L, 0, i)) + \Upgamma^{\ast}. \end{aligned} $$

Case II \((L, 1, i)\), \(0 \le L < \delta_1 + \delta_2\), \(i \ge 0\): The deactivate action is the only feasible action, and, as in Case I, the optimality equation is satisfied exactly in this case.

Case III \((L, 0, i)\), \(\delta_1 + \delta_2 \le L \le K - c\), \(i \ge 0\): Using analysis similar to Case I, the r.h.s. is equal to \(\alpha L + \alpha qc\) for the deactivate action. Note that \(\bar{r}((L, 0, i), 1) = \hbox{F}_{0, 1}^{(i+1)}\). The r.h.s. for the activate action is given by,

$$ \begin{aligned} \hbox{r.h.s.}(1) &= \hbox{F}_{0, 1}^{(i+1)} + \hbox{F}_{0, 1}^{(i+1)}(1 - q) h^{\ast}((L - \delta_1 - \delta_2, 1, 0)) + \hbox{F}_{0, 1}^{(i+1)} q h^{\ast}((L + c - \delta_1 - \delta_2, 1, 0)) \\ & + \left(1 - \hbox{F}_{0, 1}^{(i+1)}\right) q h^{\ast}((L + c - \delta_1, 0, 0)) + \left(1 - \hbox{F}_{0, 1}^{(i+1)}\right)(1 - q) h^{\ast}((L - \delta_1, 0, 0)) \\ &= \alpha L + \alpha qc + \hbox{F}_{0, 1}^{(i+1)} (1 - \alpha \delta_2) - \alpha \delta_1 \le \alpha L + \alpha qc. \end{aligned} $$

The last inequality follows from the fact that \(\hbox{F}_{0, 1}^{(i+1)} \le \pi^{\rm on} \;\forall i \ge 0\), and \(\frac{\alpha \delta_1}{1 - \alpha \delta_2} = \frac{\pi^{\rm on}}{\pi^{\rm on} + 1 - p_c^{\rm on}} \ge \pi^{\rm on}\). Since the l.h.s. is \(\alpha L + \alpha qc\), deactivation is optimal in this case, and the optimality equation is satisfied exactly.

Case IV \((L, 1, i)\), \(\delta_1 + \delta_2 \le L \le K - c\), \(i \ge 0\): As in Case I, the optimality equation is satisfied for the deactivate action. Note that \(\bar{r}((L, 1, i), 1) = 1 - \hbox{F}_{1, 0}^{(i+1)}\). The r.h.s. for the activate action is given by,

$$ \begin{aligned} \hbox{r.h.s.}(1) &= \left(1 - \hbox{F}_{1, 0}^{(i+1)}\right) + \left(1 - \hbox{F}_{1, 0}^{(i+1)}\right)\left[(1 - q) h^{\ast}((L - \delta_1 - \delta_2, 1, 0)) + q h^{\ast}((L + c - \delta_1 - \delta_2, 1, 0))\right] \\ & \quad + \hbox{F}_{1, 0}^{(i+1)} q h^{\ast}((L + c - \delta_1, 0, 0)) + \hbox{F}_{1, 0}^{(i+1)}(1 - q) h^{\ast}((L - \delta_1, 0, 0)) \\ &= \alpha L + \alpha qc + \left(1 - \hbox{F}_{1, 0}^{(i+1)}\right) (1 - \alpha \delta_2) - \alpha \delta_1. \end{aligned} $$

Thus activation is optimal if \(\hbox{F}_{1, 0}^{(i+1)} \le \frac{1 - \alpha \delta_2 - \alpha \delta_1}{1 - \alpha \delta_2} = \frac{1 - p_c^{\rm on}}{\pi^{\rm on} + 1 - p_c^{\rm on}}\). Note that \(\hbox{F}_{1, 0}^{(i+1)}\) increases from \(1 - p_c^{\rm on}\) to \(\pi^{\rm off}\) as \(i\) increases from 0 to \(\infty\). Since \(1 - p_c^{\rm on} \le \frac{1 - p_c^{\rm on}}{\pi^{\rm on} + 1 - p_c^{\rm on}} \le \pi^{\rm off}\), there exists an \(i'\) such that activation is optimal for \(i < i'\), and deactivation is optimal otherwise. Also note that since \(\alpha \delta_1\) and \(1 - \alpha \delta_2\) are of order \(O\left(\frac{1}{\rho}\right)\), \(\epsilon \sim O\left(\frac{1}{\rho}\right)\), i.e., \(\epsilon \to 0\) as \(\rho\) becomes large.
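The index \(i'\) can be computed numerically by iterating \(\hbox{F}_{1, 0}^{(i+1)}\) until it first exceeds the activation threshold. A minimal sketch under the assumptions of Case IV, with \(\pi^{\rm on} = \frac{1 - p_c^{\rm off}}{2 - p_c^{\rm on} - p_c^{\rm off}}\) (the function name and the iteration cap are ours):

```python
def case_iv_threshold(p_on, p_off, max_iter=100_000):
    """Returns the i' of Case IV: activation is optimal in belief state
    (L, 1, i) for i < i', and deactivation is optimal otherwise.
    """
    pi_on = (1.0 - p_off) / (2.0 - p_on - p_off)    # stationary Pr[event on]
    threshold = (1.0 - p_on) / (pi_on + 1.0 - p_on)
    F = 0.0                                         # F_{1,0}^{(0)} = 0
    for i in range(max_iter):
        F = (1.0 - F) * (1.0 - p_on) + F * p_off    # now F = F_{1,0}^{(i+1)}
        if F > threshold:
            return i                                # first i favoring deactivation
    return max_iter                                 # F stayed below the threshold
```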

Case V \((L, 0, i)\), \(K - c < L \le K\), \(i \ge 0\): In the optimality equation (14), the l.h.s. is \(\alpha L + \alpha qc\), while \(\hbox{r.h.s.}(0) = \alpha L + \alpha qc + \alpha q(K - L - c)\) and \(\hbox{r.h.s.}(1) = \alpha L + \alpha qc + \hbox{F}_{0, 1}^{(i+1)} (1 - \alpha \delta_2) - \alpha \delta_1\). Therefore, activation is optimal iff \(\hbox{F}_{0, 1}^{(i+1)} \ge \frac{\alpha \delta_1 + \alpha q (K - L - c)}{1 - \alpha \delta_2} = \frac{\pi^{\rm on}}{\pi^{\rm on} + 1 - p_c^{\rm on}} - \frac{q \pi^{\rm on}(L + c - K)}{\delta_1(\pi^{\rm on} + 1 - p_c^{\rm on})}\). Note that since \(\hbox{F}_{0, 1}^{(i)}\) is an increasing function of \(i\), the larger the recharge rate \(q\), the earlier the activation in this state. Moreover, as in Case IV, the optimality equation is satisfied within an error of \(O\left(\frac{1}{\rho}\right)\).

Case VI \((L, 1, i)\), \(K - c < L \le K\), \(i \ge 0\): In the optimality equation (14), the l.h.s. is \(\alpha L + \alpha qc\), while \(\hbox{r.h.s.}(0) = \alpha L + \alpha qc + \alpha q(K - L - c)\) and \(\hbox{r.h.s.}(1) = \alpha L + \alpha qc + \left(1 - \hbox{F}_{1, 0}^{(i+1)}\right) (1 - \alpha \delta_2) - \alpha \delta_1\). Thus, activation is optimal iff \(\hbox{F}_{1, 0}^{(i+1)} \le \frac{1 - \alpha (\delta_1 + \delta_2) + \alpha q(L + c - K)}{1 - \alpha \delta_2} = \frac{1 - p_c^{\rm on}}{\pi^{\rm on} + 1 - p_c^{\rm on}} + \frac{q \pi^{\rm on}(L + c - K)}{\delta_1(\pi^{\rm on} + 1 - p_c^{\rm on})}\). For instance, when \(L = K\), activation is the optimal action. As in Case IV, the optimality equation is satisfied within an error of \(O\left(\frac{1}{\rho}\right)\). \(\square\)
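Taken together, Cases I–VI reduce the optimal action at a belief state \((L, E, i)\) to a few threshold comparisons. The following sketch assembles them, reusing transition_prob() from Appendix I; it is an illustration under the lemma's assumptions, and all names are ours:

```python
def mu_star(L, E, i, K, c, q, delta1, delta2, p_on, p_off):
    """Optimal action (1 = activate, 0 = deactivate) for belief state
    (L, E, i), assembled from Cases I-VI of the proof of Lemma 2.
    """
    if L < delta1 + delta2:                        # Cases I, II
        return 0                                   # activation infeasible
    pi_on = (1.0 - p_off) / (2.0 - p_on - p_off)   # stationary Pr[event on]
    denom = pi_on + 1.0 - p_on
    F = transition_prob(E, i + 1, p_on, p_off)     # F_{E,1-E}^{(i+1)}
    if L <= K - c:
        if E == 0:                                 # Case III
            return 0
        return int(F <= (1.0 - p_on) / denom)      # Case IV
    slack = q * pi_on * (L + c - K) / (delta1 * denom)  # battery near full
    if E == 0:                                     # Case V
        return int(F >= pi_on / denom - slack)
    return int(F <= (1.0 - p_on) / denom + slack)  # Case VI
```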

Proof of Lemma 3

From Cases IV and VI above, since \(\pi^{\rm on} \le p_c^{\rm on}\), we have \(\hbox{F}_{1, 0}^{(1)} = 1 - p_c^{\rm on} \le \frac{1 - p_c^{\rm on}}{\pi^{\rm on} + 1 - p_c^{\rm on}}\). Therefore, \(\mu^{\ast}((L, 1, 0)) = 1\). Similarly, since \(\frac{1}{2} < p_c^{\rm on}, p_c^{\rm off} < 1\), we have \(\hbox{F}_{0, 1}^{(1)} = 1 - p_c^{\rm off} < \frac{\pi^{\rm on}}{\pi^{\rm on} + 1 - p_c^{\rm on}}\). Therefore, from Case III above, \(\forall L: \delta_1 + \delta_2 \le L \le K - c\), \(\mu^{\ast}((L, 0, 0)) = 0\). Properties (ii) and (iii) follow since the functions \(\hbox{F}_{0, 1}^{(i)}\) and \(\hbox{F}_{1, 0}^{(i)}\) are non-decreasing in \(i\) from (17), and from Cases III–VI above. \(\square\)

Cite this article

Jaggi, N., Kar, K. & Krishnamurthy, A. Rechargeable sensor activation under temporally correlated events. Wireless Netw 15, 619–635 (2009). https://doi.org/10.1007/s11276-007-0091-0
