I. Introduction
A key contemporary challenge is the design of efficient methods for exploiting the emerging technology of wireless sensor networks (WSNs). A WSN consists of a large number of sensor nodes deployed over a certain area, providing real-time data about certain (target) phenomena [3], [4], [8]. Typical applications of WSNs include military operations, area surveillance, environmental and habitat monitoring, remote sensing, and global awareness. The deployment of a WSN can be random (for example, dropping sensors over a hostile terrain or a disaster area) or deterministic (for example, placing sensors along a pipeline to monitor pressure and/or temperature, or for boundary surveillance). In both cases, however, the WSN must be configured into a reliable set of clusters. Each cluster has one or more cluster heads, which relay the aggregated data to a sink node, such as a base station or a command/end-user node, usually via gateways or relay nodes (Fig. 1). To ensure high reliability and fault tolerance, clusters of large numbers of low-cost sensors are used redundantly, together with methods for information integration/fusion (aggregation) and synchronization [4], [5]. Two major factors contribute to the probability of sensor node failure: first, the limited resources and low manufacturing cost of the nodes; second, the harsh environments in which WSN applications typically operate. Reliable monitoring of a phenomenon (or event detection) depends on the collective data provided by the target cluster of sensors, not on any individual node. This motivated the reliability measure for WSNs introduced in [1], namely the probability that an operational path exists from at least one operational sensor in each target cluster to the sink node (DSNREL). This measure, however, may be inadequate when reliable operation of the WSN requires a minimum amount of data to be obtained from each cluster of sensors.
Consider, for example, measuring the average temperature over a sector of a pipeline, or tracking the trajectory of a moving target. In such cases, a minimum amount of information has to be received from different sensors, usually spread over different spatial regions. In this paper we propose a new measure for the reliability of WSNs that reflects the cooperative operation of the sensors and the flow requirement into a given sink node. We consider the network operational only if a minimum amount of data can be delivered from each target cluster to a given sink node (a cluster head or a gateway). We assume data rates that are low compared to the channel capacity (communication bandwidth) between nodes. We also assume a contention-free protocol (e.g., a TDMA/FDMA-based protocol (Time Division Multiple Access/Frequency Division Multiple Access) [8], [12]), and that sensor nodes act as data generators as well as relay nodes. Given the location of each node and its transmission range, we can determine the topology of the WSN. We use a probabilistic graph model to represent a WSN; using this model, we investigate the complexity of the problem and present methods for computing the reliability measure. In Section II, we present the graph model, our assumptions, and a formulation of the problems. In Section III, we show that the problem is computationally intractable, in particular #P-hard, for arbitrary networks. Section IV describes an algorithm for arbitrary WSNs. In Section V, we consider special cases for which we develop efficient algorithms for either computing or bounding the reliability. In Section VI, we present some numerical results. Section VII concludes the paper and discusses future work.

Fig. 1. Clustering, cluster head, gateway, and sensor nodes.
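The flow-based reliability notion above can be made concrete with a small simulation. The sketch below is illustrative only: the node names, link reliabilities, capacities, and the Monte Carlo approach are our own assumptions, not the paper's algorithms (those appear in Sections IV and V). It samples independent link failures on a probabilistic graph and counts the fraction of states in which every target cluster can deliver at least a required amount of data to the sink, checked via a textbook Edmonds-Karp max-flow.

```python
import random
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow. `cap` maps u -> {v: capacity}."""
    residual = {u: dict(vs) for u, vs in cap.items()}
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in residual.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Reconstruct the path and its bottleneck capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= b
            back = residual.setdefault(v, {})
            back[u] = back.get(u, 0) + b
        flow += b

def estimate_reliability(links, clusters, sink, demand, trials=20000, seed=1):
    """Fraction of sampled link states in which every target cluster
    can deliver at least `demand` units of data to the sink."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        cap = {}
        for (u, v), (p, c) in links.items():
            if rng.random() < p:            # link is up with probability p
                cap.setdefault(u, {})[v] = c
        operational = True
        for cluster in clusters:
            g = {u: dict(vs) for u, vs in cap.items()}
            g["SRC"] = {node: demand for node in cluster}  # super-source
            if max_flow(g, "SRC", sink) < demand:
                operational = False
                break
        ok += operational
    return ok / trials

# Hypothetical 3-node example: two redundant sensors s1, s2 feeding a relay r.
# Each link is given as (operational probability, capacity).
links = {("s1", "r"): (0.9, 2), ("s2", "r"): (0.9, 2), ("r", "sink"): (0.95, 4)}
est = estimate_reliability(links, clusters=[["s1", "s2"]], sink="sink", demand=1)
# Exact reliability here is 0.95 * (1 - 0.1**2) = 0.9405; `est` should be close.
```

With demand 1 this degenerates to the path-based DSNREL measure; raising `demand` above any single link's capacity is what distinguishes the flow-based measure, since the cluster must then use several sensors cooperatively.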