Sensor Parameter Estimation for Full-View Coverage of Camera Sensor Networks Based on Bounded Convex Region Deployment

Recently, full-view coverage was introduced to capture intruders from any direction in camera sensor networks (CSNs); it can identify intruders more effectively than traditional coverage. However, full-view coverage usually requires a large number of camera sensors, which makes its estimation problem more complicated. In addition, in practical application scenarios, the camera sensors are often randomly deployed in an irregular bounded region. In this paper, we assume that many heterogeneous camera sensors are deployed in a bounded convex region. In order to predict the sensor parameters needed to achieve any given full-view coverage probability (ratio) of the heterogeneous CSNs, we derive a sensor parameter estimation model that estimates the sensor scale, sensing radius and other parameters, so as to guide engineers in designing better CSNs. Finally, to evaluate the accuracy of the proposed model, a series of simulation experiments is conducted to verify the results. Analysis of the results shows that the mean absolute coverage error defined in this paper is not greater than 6.5%.


I. INTRODUCTION
At present, with the advancement of camera sensor and wireless embedded processor technologies, camera sensor networks (CSNs) have witnessed vigorous development. Compared with conventional omnidirectional sensor networks, a CSN is composed of a large number of camera sensors with adjustable sensing directions, and it has the characteristics of real-time data processing, adaptive adjustment of sensing direction, and recognition and tracking of targets [1]; it can capture more valuable information from the environment in the form of images or videos. In addition, due to the extensive application of CSNs in next-generation medical monitoring, intelligent traffic, intelligent indoor monitoring, anti-terrorism monitoring and disaster warning [2]-[4], many scholars have been attracted to engage in relevant research.
In general, coverage is one of the fundamental issues in CSNs, and it can be categorized into three types: target coverage, area coverage and barrier coverage. In traditional omnidirectional sensor networks, these coverage problems do not need to take the facing direction of the target into account. However, in some specific application scenarios (such as smart city security monitoring, anti-terrorism detection, etc.), CSNs not only need to detect the targets (intruders) entering the monitoring region, but also need to identify the targets from any direction. If the effective front image of a target can be captured by a camera sensor, the target is easier to recognize using image recognition algorithms. In such applications, it is important to ensure the detection of the effective front image of the target in real time. Based on this practical requirement, Wang and Cao [5] proposed the concept of full-view coverage. As shown in Fig.1, no matter which direction the target faces, there is at least one camera sensor which can detect it and capture its effective front image; in this case the target is considered to be full-view covered. (The associate editor coordinating the review of this manuscript and approving it for publication was Noor Zaman.)
In CSNs, since most application scenarios are unreachable, a random deployment strategy is often adopted, which means the coverage quality cannot be predetermined. In addition, when a large number of camera sensors are randomly deployed, it is impractical to achieve the predetermined coverage quality by adjusting the positions and sensing directions of the camera sensors. Therefore, in order to achieve the desired coverage quality, it is necessary to estimate the sensor-related parameters (such as sensor density, sensing radius, etc.) before the initial deployment. The works [6], [7] studied the area coverage estimation issue in CSNs, but the full-view coverage estimation problem was not taken into account, and the deployment region was assumed to be a bounded square region. However, in practical application scenarios, the monitoring region is often an irregular bounded region. Although [5], [8], [9] studied the critical density for full-view coverage in CSNs based on a bounded square monitoring region, they did not consider the sensor parameter estimation problem for full-view coverage in CSNs based on a bounded, irregular monitoring region, and their models cannot be used to estimate the sensor parameters under an arbitrary predetermined full-view coverage ratio.
In this paper, we consider the sensor parameter estimation issue for any given full-view coverage ratio in heterogeneous CSNs based on bounded convex region deployment. It is assumed that all targets (intruders) are located in a bounded convex region. In order to meet the predetermined full-view coverage ratio, we derive a sensor parameter estimation model to predict the sensor scale, sensing radius and field-of-view angle before the random initial deployment, so as to better guide engineers in designing CSNs. To the best of our knowledge, there is no relevant work on such issues.
The main contributions of this paper are summarized as follows:
• We first study the full-view coverage estimation issue in CSNs based on bounded convex region deployment.
• We derive a sensor parameter estimation model for any given full-view coverage ratio in heterogeneous CSNs.
• We analyze the performance and accuracy of our proposed estimation model.
The rest of this paper is organized as follows. In Section 2, we present the related works. In Section 3, the problem description and relevant definitions are given. Section 4 introduces the sensor parameter estimation model for any given full-view coverage ratio in detail. A series of simulation experiments is presented in Section 5. Section 6 is the conclusion.

II. RELATED WORKS
Currently, the coverage issue of camera sensor networks has attracted extensive attention from industry and academia. Most of works mainly focus on target coverage, area coverage and barrier coverage.
In the field of target coverage, Yang et al. [10] proposed a novel coverage degree (CD) coverage model for visual sensor networks (VSNs); afterward, they presented a harmony search-based coverage-enhancing algorithm to improve coverage for the CD-coverage problem. In [11], the Maximum Coverage with Minimum Sensors (MCMS) problem for VSNs was studied, and a modified centralized greedy algorithm was presented to improve target coverage with minimum sensors. In [12], the authors studied the problem of minimum-node deployment of directional sensor networks using a mobile charger. Mohamadi et al. [13] proposed four learning automata-based algorithms to enhance target coverage in directional sensor networks.
In the area coverage field, [14] presented a dynamic programming algorithm to optimize the coverage overlaps so as to improve the area coverage. In [15], two scheduling algorithms called ECNS and EAECNS were proposed to put more sensors to sleep and minimize blind cells. In [16], in order to optimize the area coverage, the authors proposed a coverage-enhancing algorithm based on the overlap-sense ratio. Lin et al. [17] studied the area coverage problem in directional mobile sensor networks, and proposed two enhanced deployment algorithms (EDA-I and EDA-II) to achieve a high area sensing coverage ratio.
In order to better guide the deployment of CSNs, the works [6], [7] focused on the issue of area coverage estimation and also studied the target coverage estimation problem. By using the proposed models, one can predict the coverage ratio that will be achieved after the CSNs are randomly deployed.
The above works mainly focus on the traditional coverage problem. In order to better recognize the facing direction of a target at any time, Wang and Cao [5], [8] first proposed the definition of full-view coverage in CSNs, and derived the essential conditions for a target or subregion to be full-view covered. They also studied the full-view coverage estimation of homogeneous CSNs based on equilateral triangle deployment, but the full-view coverage estimation of heterogeneous CSNs under arbitrary deployment and the boundary effect were not considered. Hu et al. [9] proposed a model to calculate the critical sensing radius for a full-view coverage ratio equal to 1 in CSNs under 2-dimensional random walk movement. However, that research cannot be directly applied to estimate the sensor density for an arbitrary full-view coverage ratio, and the boundary effect is also not taken into account. Gan et al. [18] considered the issue of asymptotic full-view coverage of mobile heterogeneous CSNs, and revealed the critical requirements for achieving asymptotic full-view coverage. Yu et al. [19] presented a new concept of local face-view barrier coverage, and derived the local face-view barrier coverage estimation model of CSNs under deterministic deployment. In [20], the full-view barrier problem of homogeneous CSNs was analyzed. The target region was divided into different subregions, each covered by the same set of sensors. By constructing the graph of subregions and their adjacent subregions, the problem was abstracted as a shortest path problem. In addition, the authors also studied the deterministic deployment strategy to achieve full-view coverage with minimum camera sensors.
In [21], the authors studied the minimum-sensor full-view barrier coverage problem in homogeneous CSNs. By dividing the target region into full-view and non-full-view subregions, a weighted graph was constructed; finally, Dijkstra's algorithm was used to obtain the full-view barrier cover set with minimum sensors. Aiming at the problem of full-view barrier coverage enhancement in homogeneous CSNs, [22] proposed a distributed algorithm to adjust the sensing directions of sensors and then choose near-optimal sensors to construct the full-view barrier cover set. Gan et al. [23] proposed a new distributed algorithm to enhance the full-view coverage of homogeneous CSNs. In [24], the problem of full-view coverage with minimum sensors in CSNs was studied, and it was proved that area full-view coverage can be transformed into a target full-view coverage issue. In addition, centralized and distributed greedy algorithms were proposed to construct a minimum full-view cover set that ensures full-view coverage of a given region. The works [25], [26] studied the fairness-based full-view coverage maximization problem, and proposed algorithms to schedule the orientations of camera sensors so as to maximize the minimum cumulative full-view coverage time of target points.
All the above works assume that all targets are located in a bounded square region; none of them takes into account the sensor parameter estimation problem for any given full-view coverage ratio of heterogeneous CSNs based on a bounded irregular monitoring region.

III. PROBLEM DESCRIPTION AND DEFINITIONS
In this section, we present the sensing model and deployment scenario used in this paper. In addition, we introduce the problem description and relevant definitions of full-view coverage in CSNs.

A. SENSING MODEL
In general, we use a 4-tuple $\langle s, r, \varphi, \vec{v}\rangle$ to represent the two-dimensional sensing model of a camera sensor. As illustrated in Fig.2, s, r and φ denote the position of the camera sensor, the sensing radius and the field-of-view (FoV) angle, respectively. The sensing direction of the camera sensor is represented as $\vec{v}$. We call the sector disk determined by $\langle s, r, \varphi, \vec{v}\rangle$ the sensing region of camera sensor s, denoted as |s|.

B. HETEROGENEITY OF SENSORS
In order to better describe the heterogeneity of camera sensors, we divide the camera sensors into u groups G_1, G_2, ..., G_u, where group G_i contains N_i = ω_i N camera sensors, N is the total number of camera sensors in the CSN, and ω_i is a constant called the density weight. Clearly, 0 < ω_i < 1 and $\sum_{i=1}^{u} \omega_i = 1$. All camera sensors in group G_i have the same sensing radius r_i and FoV angle φ_i. In addition, camera sensors in different groups have different sensing radii and FoV angles.
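To make the partition concrete, the grouping above can be sketched as follows (a minimal illustration; the function name and the example weights are ours, not from the paper):

```python
def group_sizes(N, weights):
    """Split N camera sensors into u groups G_i with N_i = w_i * N,
    where the density weights w_i satisfy 0 < w_i < 1 and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "density weights must sum to 1"
    return [round(w * N) for w in weights]

# Example: N = 1000 sensors split into u = 3 heterogeneous groups.
sizes = group_sizes(1000, [0.3, 0.5, 0.2])
```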

C. DEPLOYMENT SCENARIO
In this paper, we assume that all targets are located in a bounded convex region denoted as R, and |R| denotes the area of this region. Aiming to monitor the targets, we randomly scatter a large number of heterogeneous camera sensors with different sensor densities λ, sensing radii r and FoV angles φ to achieve a desired full-view coverage quality. When a great number of heterogeneous camera sensors are stochastically deployed in the bounded convex region R, some camera sensors located near the boundary of R may generate a boundary effect, because part of their sensing range may fall outside of R. In order to eliminate the boundary effect, we extend the bounded convex region R into a new region denoted as ER. As illustrated in Fig.3, the boundary of the convex region R is extended outward by the distance $d = \frac{1}{u}\sum_{i=1}^{u} r_i$ (the mean sensing radius) to construct the extended bounded convex region ER. The extended band-shaped strip can be approximated as a rectangle whose height is d and whose width is the perimeter L of the bounded convex region R, so the area of the extended bounded convex region ER can be approximated as
$$|ER| \approx |R| + L \cdot d. \tag{1}$$
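Equation (1) can be checked numerically with a short sketch (the function name is ours; the band approximation ignores the rounded corners of the strip, so it slightly underestimates |ER|):

```python
def extended_area(area_R, perimeter_R, radii):
    """Approximate |ER| = |R| + L*d, where d is the mean sensing radius
    over the u sensor groups and L is the perimeter of R."""
    d = sum(radii) / len(radii)          # extension distance d
    return area_R + perimeter_R * d

# A 100 m x 100 m square R with group radii r1 = 10, r2 = 15 (d = 12.5):
area_ER = extended_area(100 * 100, 4 * 100, [10, 15])
```

For a 100 m square the band approximation gives 15000 m², slightly less than the 125 m × 125 m bounding square (15625 m²) because the corner pieces are neglected.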

D. DEFINITIONS AND THEOREM
In order to make the full-view coverage issue clearer, we propose the following definitions and theorem.

1) DETECTION REGION
We define the detection region of target t as a circular region centered at t with radius equal to the sensing radius. Therefore, for camera sensors in different groups, the detection region of a given target t is different. We use D(t)_i to denote the detection region of target t for camera sensors in group G_i; its area can be expressed as $|D(t)_i| = \pi r_i^2$, as shown in Fig.4.

2) MAXIMUM DETECTION REGION
The maximum detection region of target t, denoted as D(t)_M, is defined as the circular region centered at t with radius $r_{\max} = \max_{1 \le i \le u} r_i$, i.e., the largest detection region of t over all groups.

3) θ-COVERAGE
As illustrated in Fig.5, $\vec{tf}$ is the current facing direction of target t in the bounded convex region R. Here, θ ∈ (0, π/2), called the effective angle, is a predefined constant parameter. A target t is said to achieve θ-coverage if there exists at least one camera sensor s for which the following conditions are all satisfied:
• $|st| \le r$. It means that camera sensor s is located in the detection region of target t.
• The angle between the anti-viewed direction $\vec{st}$ of target t and the orientation $\vec{v}$ of camera sensor s is not greater than φ/2. It means that t lies within the FoV of s.
• The angle between the viewed direction $\vec{ts}$ and the facing direction $\vec{tf}$ of target t is not greater than θ. It means that s can capture an effective front image of t.
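The three conditions can be turned into a direct geometric test. The sketch below (function names are ours) checks whether one camera sensor θ-covers a target for a given facing direction, assuming the effective-angle condition is ≤ θ (consistent with the arc length θ/π used later in Theorem 2):

```python
import math

def angle_between(u, v):
    """Angle in [0, pi] between two 2-D vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    n = math.hypot(*u) * math.hypot(*v)
    return math.acos(max(-1.0, min(1.0, dot / n)))

def theta_covered(s, v_dir, r, phi, t, f_dir, theta):
    """True if sensor s (orientation v_dir, radius r, FoV phi) theta-covers
    target t whose facing direction is f_dir."""
    st = (t[0] - s[0], t[1] - s[1])              # anti-viewed direction s->t
    if math.hypot(*st) > r:                      # condition 1: |st| <= r
        return False
    if angle_between(st, v_dir) > phi / 2:       # condition 2: t inside FoV
        return False
    ts = (-st[0], -st[1])                        # viewed direction t->s
    return angle_between(ts, f_dir) <= theta     # condition 3: angle <= theta
```

For example, a sensor at the origin facing +x with r = 10 and φ = 90° θ-covers a target at (5, 0) that faces back toward the sensor when θ = 45°, but not a target facing away from it.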

4) FULL-VIEW COVERAGE
A target t is called full-view covered if and only if, for any facing direction $\vec{tf}$, the target achieves θ-coverage.

5) CIRCLE LIST VECTOR SET
It is assumed that a great number of heterogeneous camera sensors categorized into u groups are deployed in the extended bounded convex region ER, and that a target t is located in the bounded convex region R. We use CL(t) to denote the circle list vector set, which is defined as the sequence $\vec{ts_{v_1}}, \vec{ts_{v_2}}, \ldots, \vec{ts_{v_K}}$ formed by the viewed directions of all camera sensors that cover target t in the maximum detection region D(t)_M, sorted in clockwise (or counterclockwise) order. Fig.6 shows an example of the circle list vector set of target t.

Theorem 1: Target t is full-view covered if and only if the angle between every pair of adjacent vectors in CL(t) is not greater than 2θ.

Proof sketch: Suppose the angle between $\vec{ts_{v_i}}$ and $\vec{ts_{v_{i+1}}}$ is larger than 2θ for some i, and consider a facing direction $\vec{tf}$ along the bisector of this angle. Then the angle between $\vec{tf}$ and $\vec{ts_{v_i}}$, as well as the angle between $\vec{tf}$ and $\vec{ts_{v_{i+1}}}$, is greater than θ, so this facing direction is not θ-covered and t is not full-view covered. Conversely, if every adjacent angle is not greater than 2θ, then any facing direction $\vec{tf}$ lies between some pair of adjacent viewed directions, and its angle to the nearer of the two is not greater than θ; hence t is full-view covered.
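Theorem 1 suggests a simple full-view test: sort the viewed directions of the sensors covering t and verify that no angular gap exceeds 2θ. A sketch (the function name is ours):

```python
import math

def full_view_covered(viewed_angles, theta):
    """Theorem 1: target t is full-view covered iff every gap between
    adjacent viewed directions (angles of the t->s vectors, in radians)
    on the circle is at most 2*theta."""
    if not viewed_angles:
        return False
    a = sorted(x % (2 * math.pi) for x in viewed_angles)
    gaps = [a[i + 1] - a[i] for i in range(len(a) - 1)]
    gaps.append(2 * math.pi - a[-1] + a[0])      # wrap-around gap
    return max(gaps) <= 2 * theta
```

With θ = 45°, six sensors viewed at 60° spacing satisfy every gap ≤ 90° and the target is full-view covered, while two diametrically opposite sensors leave 180° gaps and it is not.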

E. PROBLEM DESCRIPTION
To make the issue more tractable, we make the following assumptions: • All targets are located in the bounded convex region R.
• All camera sensors are randomly deployed in the extended bounded convex region ER without any obstacles.
• Camera sensors in the same group have the same sensing radius and field-of-view angle.
• Camera sensors in different groups have different sensing radii and field-of-view angles.

1) PROBLEM DESCRIPTION
Before the initial deployment of a heterogeneous CSN, in order to achieve a given full-view coverage quality, how should the parameters of the heterogeneous camera sensors (such as sensor scale, sensing radius and FoV angle) be determined, so as to better guide engineers in designing the CSN?

F. MAIN SYMBOLS
In order to describe the problem more clearly, the main symbols used in this paper are summarized in Table 1.

IV. SENSOR PARAMETER ESTIMATION MODEL FOR FULL-VIEW COVERAGE IN HETEROGENEOUS CSNs WITH BOUNDED CONVEX DEPLOYMENT
In this section, we concentrate on the derivation of the sensor parameter estimation model. This model can be used to predict the sensor scale, sensing radius and FoV angle, so as to guide engineers in optimizing the design and initial deployment of CSNs. We use A_i to represent the event that target t in the bounded convex region R is covered by a camera sensor in group G_i deployed in the extended bounded convex region ER. The probability of event A_i can be expressed as follows:
$$P(A_i) = \frac{\pi r_i^2}{|ER|} \cdot \frac{\varphi_i}{2\pi}, \quad i = 1, 2, \ldots, u.$$
Here, $\frac{\pi r_i^2}{|ER|}$ with i = 1, 2, ..., u represents the probability that a camera sensor in group G_i is located in the detection region D(t)_i, and $\frac{\varphi_i}{2\pi}$ represents the probability that the sector sensing region of a camera sensor in group G_i faces towards t.
Let $B_i^{k_i}$ represent the event that target t is covered by exactly k_i camera sensors in group G_i. Since each of the N_i sensors in group G_i independently covers t with probability P(A_i), the event follows a binomial (Bernoulli trial) distribution, and its probability can be computed by the following formula:
$$P(B_i^{k_i}) = \binom{N_i}{k_i} P(A_i)^{k_i}\,\bigl(1 - P(A_i)\bigr)^{N_i - k_i}.$$
Lemma 1: $P(B_i^{k_i})$ approximately obeys a Poisson distribution with intensity $\lambda_i |D(t)_i| q_i$. That is,
$$\lim_{N_i \to \infty} P(B_i^{k_i}) = e^{-\lambda_i |D(t)_i| q_i}\, \frac{\bigl(\lambda_i |D(t)_i| q_i\bigr)^{k_i}}{k_i!},$$
where $\lambda_i = \frac{N_i}{|ER|}$ denotes the sensor density of the camera sensors in group G_i, and $q_i = \frac{\varphi_i}{2\pi}$ denotes the probability that the sector sensing region of a camera sensor in group G_i faces towards target t.
Proof: This is the classical Poisson limit of the binomial distribution, with $N_i P(A_i) = \lambda_i |D(t)_i| q_i$ held fixed as $N_i \to \infty$. Lemma 1 is proved.
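Lemma 1 can be sanity-checked numerically by comparing the exact binomial probability with its Poisson limit (the parameter values below are illustrative, not taken from the paper):

```python
import math

def binom_pmf(k, n, p):
    """Exact binomial probability of k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, mu):
    """Poisson probability with intensity mu."""
    return math.exp(-mu) * mu**k / math.factorial(k)

# p_i = (pi r_i^2 / |ER|) * q_i  with  q_i = phi_i / (2*pi)
N_i, r_i, phi_i, area_ER = 600, 10.0, math.pi / 2, 15625.0
p_i = (math.pi * r_i**2 / area_ER) * (phi_i / (2 * math.pi))
mu_i = N_i * p_i                     # = lambda_i * |D(t)_i| * q_i

# Largest pointwise gap between the binomial pmf and its Poisson limit.
diff = max(abs(binom_pmf(k, N_i, p_i) - poisson_pmf(k, mu_i))
           for k in range(20))
```

With N_i = 600 and p_i ≈ 0.005 the two distributions already agree to within about 1%, which is why the Poisson approximation is used throughout the derivation.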

Lemma 2: If the numbers of covering sensors from the u groups are independent Poisson random variables with intensities $\mu_1, \mu_2, \ldots, \mu_u$, then their sum is a Poisson random variable with intensity $\mu = \sum_{i=1}^{u} \mu_i$.
Proof: Expanding $e^{\mu x}$ by the Taylor formula gives $e^{\mu x} = \sum_{K \ge 0} \frac{(\mu x)^K}{K!}$. Given that $e^{\mu_1 x} e^{\mu_2 x} \cdots e^{\mu_u x} = e^{\mu x}$, comparing the coefficients of $x^K$ on both sides yields
$$\sum_{k_1 + k_2 + \cdots + k_u = K}\; \prod_{i=1}^{u} \frac{\mu_i^{k_i}}{k_i!} = \frac{\mu^K}{K!}.$$
Multiplying both sides by $e^{-\mu} = \prod_{i=1}^{u} e^{-\mu_i}$, it is easy to conclude that the sum of the group counts is Poisson with intensity μ. Lemma 2 is proved.

Lemma 3: Let $C_K$ denote the event that target t is covered by exactly $K = k_1 + k_2 + \cdots + k_u$ camera sensors from the different groups, where $k_i$ denotes the number of camera sensors from group G_i. According to Lemma 1 and Lemma 2, we can conclude that
$$P(C_K) = e^{-\mu}\, \frac{\mu^K}{K!}, \quad \text{with } \mu = \sum_{i=1}^{u} \lambda_i |D(t)_i| q_i.$$

Theorem 2: We use $F_t$ to represent the event that target t is full-view covered by the heterogeneous camera sensors randomly deployed in the extended bounded convex region ER. Based on Theorem 1 and Lemma 3, the probability of event $F_t$ can be expressed as
$$P(F_t) = \sum_{K \ge \lceil \pi/\theta \rceil} P(C_K)\, P(E_K),$$
where $\lceil \pi/\theta \rceil$ represents the minimum number of heterogeneous sensors in the maximum detection region D(t)_M required for full-view coverage, and $E_K$ denotes the event that the perimeter of a circle with unit length is covered by K uniformly distributed arc segments of length θ/π. According to [5], the probability of event $E_K$ is expressed as
$$P(E_K) = \sum_{j=0}^{\lfloor \pi/\theta \rfloor} (-1)^j \binom{K}{j} \left(1 - j\frac{\theta}{\pi}\right)^{K-1}.$$
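The probability in Theorem 2 can be evaluated numerically. The sketch below (function names are ours) combines the Poisson distribution of the sensor count K with the classical arc-coverage probability for P(E_K); the k_max truncation is an assumption chosen so the Poisson tail is negligible:

```python
import math

def arc_cover_prob(K, theta):
    """P(E_K): probability that K arcs of length theta/pi, placed uniformly
    on a circle of unit circumference, cover the whole circle."""
    a = theta / math.pi
    return sum((-1)**j * math.comb(K, j) * (1 - j * a)**(K - 1)
               for j in range(K + 1) if 1 - j * a > 0)

def full_view_prob(mu, theta, k_max=200):
    """Theorem 2: P(F_t) = sum over K >= ceil(pi/theta) of P(C_K)*P(E_K),
    where C_K is Poisson with intensity mu = sum_i lambda_i*pi*r_i^2*q_i."""
    k_min = math.ceil(math.pi / theta)
    total = 0.0
    pmf = math.exp(-mu)                  # P(K = 0)
    for K in range(1, k_max):
        pmf *= mu / K                    # Poisson recurrence P(K) from P(K-1)
        if K >= k_min:
            total += pmf * arc_cover_prob(K, theta)
    return total
```

As expected, a single sensor can never provide full-view coverage, and the coverage probability grows monotonically with the sensor intensity μ.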
In an actual deployment, if we know the total number of heterogeneous camera sensors and the parameters of the sensors in each group G_i, then by Theorem 2 we can calculate the actual full-view coverage probability after the initial deployment. Conversely, in order to achieve a desired full-view coverage, we can also determine the sensor parameters through Theorem 2 and simulation experiments.
The total sensor density and the density of each group of sensors, denoted as λ and λ_i (i = 1, ..., u) respectively, can be calculated by the following formulas:
$$\lambda = \frac{N}{|ER|}, \tag{8}$$
$$\lambda_i = \frac{N_i}{|ER|} = \omega_i \lambda. \tag{9}$$
Example 1: Given that $N_1/N_2 = 0.6$, assume that a large number of camera sensors from two heterogeneous groups are scattered in the extended bounded convex region ER and that 20 targets are located in the bounded convex region R. For simplicity, it is assumed that R is a square with side length 100 m, so its area is approximately $10^4$ square meters. The relevant parameters of this scenario are set as follows: the effective angle θ ranges from 30° to 60° with an incremental step of 10°, $r_1 = 10$, $r_2 = 15$, $\varphi_1 = 60°$, $\varphi_2 = 45°$. Since $d = (r_1 + r_2)/2 = 12.5$, the extended region ER is approximately a 125 m × 125 m square. In order to achieve a full-view coverage ratio P ranging from 0.6 to 0.9 with an incremental step of 0.1, according to Theorem 2 and Equations (8)-(9), we obtain the numerical relationship among the sensor density λ, the effective angle θ and the full-view coverage probability P, as shown in Table 2.
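Equations (8) and (9) amount to simple bookkeeping; here is a sketch using the Example 1 extended region (the function name is ours and the group counts are illustrative):

```python
def densities(group_counts, area_ER):
    """Eq. (8)-(9): total density lambda = N/|ER| and per-group
    densities lambda_i = N_i/|ER| = omega_i * lambda."""
    N = sum(group_counts)
    lam = N / area_ER
    return lam, [n / area_ER for n in group_counts]

# Example 1 extended region: ER is a 125 m x 125 m square.
lam, lam_i = densities([600, 1000], 125 * 125)
```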

V. PERFORMANCE EVALUATION
In this section, we use Matlab 2018a to establish the simulation scenario. It is assumed that a great number of heterogeneous camera sensors from different groups are randomly scattered in the extended bounded convex region, in which there are no obstacles, and that all targets are located in the bounded convex region. Meanwhile, in order to simplify the experiment, we do not take the diameter of the target into account. Besides, aiming to obtain more accurate experimental results, a series of simulation experiments is conducted to validate the results, and each group of simulations runs m = 200 times. The average, called the scenario simulation mean result (SSMR), is calculated by the following formula:
$$\mathrm{SSMR} = \frac{1}{m}\sum_{i=1}^{m} C_i,$$
where $C_i$, $1 \le i \le m$, is the i-th simulation result of the scenario. In order to better evaluate the accuracy and error of the model, we propose the concept of the mean absolute coverage error (MACE), expressed as
$$\mathrm{MACE} = \frac{1}{m}\sum_{i=1}^{m} |\Delta_i|,$$
where $|\Delta_i| = |C_i - P|$, $1 \le i \le m$, represents the absolute error between the i-th simulation result and the theoretical result P of the scenario.
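The two metrics can be sketched directly (the sample run values below are invented for illustration):

```python
def ssmr(results):
    """Scenario simulation mean result: average of the m simulation runs."""
    return sum(results) / len(results)

def mace(results, P):
    """Mean absolute coverage error: mean of |C_i - P| over the m runs,
    where P is the theoretical full-view coverage ratio."""
    return sum(abs(c - P) for c in results) / len(results)

runs = [0.62, 0.58, 0.61, 0.59]          # hypothetical coverage results
```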
In our simulations, we consider three metrics in the performance evaluation: sensor scale, sensing radius and FoV angle. To simplify these experiments, as shown in Fig.7, we use Matlab 2018a to construct a bounded convex region R whose area and perimeter are approximately 5078 square meters and 260 meters, and we assume that all heterogeneous camera sensors are randomly deployed in the extended bounded convex region ER, whose area can be calculated by Equation (1). Meanwhile, it is assumed that the number of targets located in the bounded convex region is 200. Besides, we only carry out simulation analysis on two kinds of heterogeneous CSNs (u = 2 with groups G_1, G_2, and u = 3 with groups G_1, G_2, G_3), and we assume that the scale of each group in the heterogeneous CSNs is equal (N_i = N_j, i ≠ j). In order to describe the heterogeneity of the CSNs more clearly, we define the heterogeneous proportions HR_1 and HR_2. According to the heterogeneous proportions, if we know the sensing radius (r_1) and FoV angle (φ_1) of the G_1 camera sensors, the sensing radii and FoV angles of the G_2 and G_3 camera sensors are respectively calculated by r_2 = HR_1 · r_1, φ_2 = HR_1 · φ_1; r_3 = HR_2 · r_1, φ_3 = HR_2 · φ_1. In addition, the effective angle is set as θ = 45° or θ = 60°. The main simulation parameters and default values are listed in Table 3.

A. EFFECT OF SENSOR SCALE
In this group of experiments, we analyze the impact of the sensor scale on the full-view coverage probability, and compare the mean absolute coverage error (MACE) between the theoretical and simulation results.
1) u = 2
In this kind of CSN, two groups of camera sensors (G_1, G_2) are deployed in the extended bounded convex region, where the number of G_1 camera sensors ranges from N_1 = 200 to N_1 = 700 with an increment step of 50, and the sensing radius and FoV angle of the G_1 camera sensors are set as r_1 = 8 and φ_1 = 90°, respectively. According to the heterogeneous proportion HR_1, the sensing radius and FoV angle of the G_2 camera sensors are calculated by r_2 = HR_1 · r_1 and φ_2 = HR_1 · φ_1, respectively. Fig.8(a) and Fig.8(b) show the trends of the full-view coverage and the mean absolute coverage error (MACE) with the numbers of G_1 and G_2 sensors, respectively. It is observed that the full-view coverage increases as the numbers of G_1 and G_2 sensors increase, and that the MACE between the simulation and theoretical results is less than 3.5% under different numbers of camera sensors. For the same numbers of G_1 and G_2 camera sensors, a larger effective angle means that fewer camera sensors are needed to achieve full-view coverage, which leads to an increase of the full-view coverage. For example, in Fig.8(a), when the total number of sensors is 800, the full-view coverage rate is approximately 65% under the effective angle θ = 60°, while it is approximately 23% under the effective angle θ = 45°.

2) u = 3
Three groups of camera sensors (G_1, G_2, G_3) are deployed in the extended bounded convex region. The parameters of groups G_1 and G_2 are set the same as in the experiment scenario u = 2. We obtain the sensing radius and FoV angle of the G_3 camera sensors by using the heterogeneous proportion HR_2, where r_3 = HR_2 · r_1 and φ_3 = HR_2 · φ_1. Fig.8(c) and Fig.8(d) show the trends of the full-view coverage and the MACE under different numbers of G_1, G_2 and G_3 sensors, respectively. The analysis results are similar to those of Fig.8(a) and Fig.8(b).

B. EFFECT OF SENSING RADIUS
In this group of experiments, we study the impact of sensing radius on the full-view coverage rate, and analyze the MACE.
1) u = 2
In this CSN, a fixed number of G_1 and G_2 camera sensors (N_1 = N_2 = 600) are randomly deployed in the extended bounded convex region, where the sensing radius of G_1 ranges from r_1 = 3 to r_1 = 10 with an increment step of 1, and the FoV angle of G_1 is set as φ_1 = 90°. Based on the heterogeneous proportion, the FoV angle of G_2 is calculated by φ_2 = HR_1 · φ_1 and the sensing radius of G_2 is set as r_2 = HR_1 · r_1.
From the trends in Fig.9(a) and Fig.9(b), we observe that the full-view coverage rate increases as the sensing radius increases, and that the MACE is not greater than 6% under different sensing radii and effective angles. When the other parameters of the G_1 and G_2 camera sensors are fixed, a larger sensing radius gives each camera sensor a larger sensing range, so the full-view coverage rate increases. Meanwhile, when the effective angle increases, fewer camera sensors are required to achieve full-view coverage, so the full-view coverage rate also increases.

2) u = 3
In order to further verify the effect of the sensing radius on full-view coverage, in this experiment scenario we deploy three groups of camera sensors (G_1, G_2, G_3) in the extended bounded convex region. The parameters of groups G_1 and G_2 are set the same as in the experiment scenario u = 2. In addition, the number of G_3 sensors is set the same as those of G_1 and G_2, and the FoV angle φ_3 and sensing radius r_3 of the G_3 camera sensors are calculated by using the heterogeneous proportion HR_2, where r_3 = HR_2 · r_1 and φ_3 = HR_2 · φ_1.
It can be seen from Fig.9(c) and Fig.9(d) that the trends of the full-view coverage and the MACE under different sensing radii are similar to those of Fig.9(a) and Fig.9(b).

C. EFFECT OF FOV ANGLE
In this group of experiments, we study the impact of fov angle on the full-view coverage probability, and analyze the MACE.

1) u = 2
In this CSN, the same number of G_1 and G_2 camera sensors (N_1 = N_2 = 400) are randomly deployed in the extended bounded convex region, where the FoV angle of G_1 ranges from φ_1 = 60° to φ_1 = 120° with an increment step of 10°, and the sensing radius of G_1 is set as r_1 = 8. By using the heterogeneous proportion, the FoV angle of G_2 is calculated by φ_2 = HR_1 · φ_1 and the sensing radius of G_2 is set as r_2 = HR_1 · r_1.
As shown in Fig.10(a) and Fig.10(b), the full-view coverage rate increases as the FoV angle increases, and the MACE is not greater than 3% under different FoV angles. When the other parameters of the G_1 and G_2 camera sensors are fixed, a larger FoV angle enlarges the sensing range of each camera sensor, so the full-view coverage rate increases. Meanwhile, when the effective angle increases, fewer camera sensors are required to achieve full-view coverage, so the full-view coverage rate also increases.

2) u = 3
In order to better verify the effect of the FoV angle on full-view coverage, in this experiment scenario we deploy three groups of camera sensors (G_1, G_2, G_3) in the extended bounded convex region. The parameters of groups G_1 and G_2 are set the same as in the experiment scenario u = 2. In addition, the number of G_3 sensors is set the same as those of G_1 and G_2, and the FoV angle φ_3 and sensing radius r_3 of the G_3 camera sensors are calculated by using the heterogeneous proportion HR_2, where r_3 = HR_2 · r_1 and φ_3 = HR_2 · φ_1.
It can be seen from Fig.10(c) and Fig.10(d) that the trends of the full-view coverage and the MACE under different FoV angles are similar to those of Fig.10(a) and Fig.10(b).

VI. CONCLUSION
In this paper, we assumed that many heterogeneous camera sensors are deployed in a bounded convex region, and we derived a sensor parameter estimation model to estimate the number of sensors, the sensing radius and the FoV angle before the initial deployment, so as to guide engineers in designing better CSNs. In order to demonstrate the accuracy and performance of the presented model, a series of experiments was conducted. Finally, we compared the estimation model results with the experimental results; the comparison shows that the proposed model is greatly helpful for engineers designing CSNs before the initial deployment. However, there are other interesting issues worth studying in the future, such as the full-view coverage enhancement issue in mobile heterogeneous CSNs and the full-view prediction issue in mobile heterogeneous CSNs with obstacle-aware deployment.