MOONGA: Multi-Objective Optimization of Wireless Network Approach Based on Genetic Algorithm

In high-density wireless sensor networks, quality of service in terms of sensing coverage, connectivity, lifetime, energy consumption and cost is closely linked to the positions of the nodes in the network. Consequently, placing a large number of nodes while simultaneously optimizing several metrics is an NP-hard problem. In this article, we propose a new approach to the node placement optimization problem. To this end, we first studied the main approaches in the literature in order to identify their limits. To obtain accurate solutions, existing physical models were studied, improved, and validated against real measurements. We then formulated the deployment optimization problem as a constrained multi-objective optimization problem. This allowed us to develop an optimizer, based on a multi-objective genetic algorithm and the weighted-sum optimization method, which we call MOONGA (multi-objective wireless network optimization using the genetic algorithm). This optimizer generates an optimal deployment according to the topology, the environment, the specifications of different applications and the preferences of the network designer. The algorithms we developed were implemented and evaluated in experiments on test data to prove the effectiveness of our approach. Analysis of the results confirms the interest and superiority of our proposed approach compared with the main approaches studied.


I. INTRODUCTION
Nowadays, wireless sensor networks (WSNs) are the most widely used solution in applications dedicated to intelligent environments, notably military, agricultural, health, surveillance, domestic, and lighting applications [1]-[4]. All of these applications require an optimal deployment of hundreds of interconnected wireless sensor nodes to maintain quality of service (QoS) [5]. This high density makes it difficult to find optimal node locations. In addition, depending on the type of application, a WSN requires different metrics to be satisfied. Generally, these metrics are counterbalanced and conflict with each other [6]. Improving one can degrade the others, and for this reason the problem cannot be solved as a single-objective optimization problem. Consequently, the optimal placement of this massive number of nodes requires many metrics to be evaluated simultaneously, which turns out to be an NP-hard problem [7]. Various works have dealt with this problem, the majority focusing on outdoor environments. However, most of them have certain limits. Indoor studies often simplify the deployment problem, either because they do not take real constraints into account or because they use simplified models. Some approaches choose one metric to optimize while defining the other metrics as constraints, so that some metrics are neglected to the detriment of others. To overcome these limits, precise modelling of the physical phenomena and a rigorous mathematical formulation are necessary to set up a new approach to this problem.
(The associate editor coordinating the review of this manuscript and approving it for publication was Chien-Fu Cheng.)
VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
In this article, we propose a new approach to optimize the placement of nodes, which we have called MOONGA (Multi-Objective optimization of wireless Network Approach based on Genetic Algorithm). In our approach, we considered a list of six metrics: (i) sensing coverage, (ii) k-coverage, (iii) sensing coverage redundancy, (iv) connectivity, (v) m-connectivity and (vi) cost.
The main contributions of this article are:
• Modelling and formulation of the original problem: to have a holistic approach, we modelled the deployment space in a flexible way that handles different shapes and specificities of the deployment space (walls, doors, obstacles, etc.). We also consider user preferences (available budget) and existing nodes (preferred or unauthorized positions). The importance of each metric is defined according to the nature of the application. The connectivity assessment is formulated according to the network topology; mesh, star and infrastructure topologies are taken into account. In addition, sensing coverage is assessed based on the required sensing coverage degree k and the required sensing probability. Likewise, connectivity is assessed based on radio propagation and the required connectivity degree m.
• Development and implementation of a new deployment algorithm: the algorithm we developed and implemented is based on a multi-objective genetic algorithm and a weighted-sum method.
The rest of this article is organized as follows. Section II discusses the main approaches in the literature and exposes existing physical models. Section III presents the approach proposed in this work: the choice of physical models, the terminology used and the mathematical formulation of the problem are presented, and the proposed GA is elucidated. Section IV is reserved for the experiments carried out, comparisons between our approach and the studied approaches, and an analysis of the obtained results, followed by a discussion. In the last section, we summarize the work and present some future directions.

II. RELATED WORKS
Recently, many approaches to the WSN deployment problem, with different assumptions, objectives and models, have been proposed. The main studied approaches are summarized in Table 2 in terms of objectives, constraints, used models and methods; furthermore, the limitations of each approach are drawn. More detailed reviews of WSN deployment problems can be found in [6], [8]-[12]. The list of acronyms used in this paper is displayed in Table 1. Problems of this kind are most often solved using meta-heuristic approaches [6], [29]-[32], and more precisely using nature-inspired algorithms. As can be seen from Table 2, among these algorithms, GA has been used frequently and has proven to be an effective method for solving this kind of problem [33], [34].
Although several studies have been conducted in this field, many limitations remain to be overcome. Some studies are specific either to the WSN application or to the deployment environment. The majority deal with outdoor environments [35]-[37]. Most often, studies in indoor environments do not consider real constraints such as obstacles (walls, appliances, furniture, etc.) [19], [28], [38]. Frequently, only rectangular environments are modelled [20], [21]. Several works optimize the placement of either sinks or sensors only. Some consider a fixed number of nodes. Other approaches consider metrics separately [22], [39]-[41].
As can be seen from Table 2, in terms of used models, most studies simplify the problem by using simplified models [16], [20], [21], [26]-[28], [39]-[41]. However, reducing problem complexity by neglecting obstacles or using simplified models for sensing and connectivity leads to inaccurate results. With the aim of obtaining reliable and optimal deployments, models that describe the physical phenomena are studied in the rest of this section.
Sensing coverage of the network is an important criterion indicating its performance [6], [42]. It measures the ability of the network to monitor physical events and collect useful information. This metric depends mainly on the sensing capacity of the nodes. Sensing models are classified into two categories: binary sensing models and probabilistic sensing models. These two types are illustrated in Fig. 1. The best-known models are summarized in Table 3.
Let D and X be, respectively, the random variable denoting the distance between the sensor and the event, and the random variable that equals 1 if there is detection and 0 otherwise. We denote by p(d) = P(X = 1|D = d) the probability that an event is detected given that it occurred at a distance d from the sensor. Let R_min and R_max be, respectively, the sensing radius of the certainty zone (Zone 1 in Fig. 1a) and the maximum sensing radius. For probabilistic models, an event that occurs at a location in Zone 1 is certainly detected. In the second zone, event sensing follows a probabilistic law, as described in Table 3, whereas events in the third zone are not detected.
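The three-zone probabilistic behaviour just described can be sketched in Python (the language used later for the optimizer); the parameter names are illustrative:

```python
import math

def sensing_probability(d, r_min, r_max, gamma, beta):
    """Three-zone probabilistic (Elfes-style) sensing model.

    Zone 1 (d <= r_min): detection is certain.
    Zone 2 (r_min < d <= r_max): detection probability decays with distance.
    Zone 3 (d > r_max): no detection.
    """
    if d <= r_min:
        return 1.0
    if d <= r_max:
        return math.exp(-gamma * (d - r_min) ** beta)
    return 0.0
```

The hardware parameters gamma and beta are fitted to real measurements later in Section III.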
Network connectivity, i.e., the capacity of the network to transmit information [6], [48], depends on the network topology and on radio propagation. In any environment, radio propagation depends on the transmission power, transceiver sensitivity, antenna, frequency, and the deployment environment. Although radio propagation in indoor environments is similar to that in outdoor environments, indoor propagation has some specificities and is more critical. Several obstacles are present in such environments (walls, furniture, etc.), which intensify attenuation. Indoor spaces are usually smaller; however, a higher computational effort per spatial unit is required.
Radio propagation models are mainly classified into three types: deterministic, stochastic and empirical models. As with sensing models, the most used model in the literature is the binary model because it facilitates analysis [49]. Deterministic models are the most accurate ones: they simulate the real phenomenon of radio wave propagation. Despite their high accuracy, they are expensive in terms of computing time. Ray-Tracing [50] and Ray-Launching [51] models are the most widely known [52]; in these models, environment complexity strongly influences the computation time. Empirical models are based on statistics and measurements; they are usually constructed after measurement campaigns in typical environments. Stochastic models use one or more random variables to model the random aspects of radio channels. They are easy to implement but have a low precision level; the Rayleigh and Rice models are commonly used [53], [54].
In the literature, the most used propagation models are the empirical ones, more precisely the FSPL, the 1SM and the MWF model [55]. We denote by PL(d) the path loss over a distance d separating the sender and the receiver. These models are summarized in Table 4.
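As a hedged sketch, the FSPL and 1SM can both be written as a log-distance law PL(d) = PL(d_0) + 10η·log10(d/d_0), differing only in the exponent η; the reference loss pl0 below is a placeholder value, not taken from the paper:

```python
import math

def path_loss_log_distance(d, pl0=40.0, eta=2.0, d0=1.0):
    """Log-distance path loss in dB: PL(d) = PL(d0) + 10*eta*log10(d/d0).

    eta = 2 corresponds to free-space (FSPL-like) propagation; empirical
    models such as the 1SM substitute a fitted exponent instead.
    """
    return pl0 + 10.0 * eta * math.log10(d / d0)
```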

III. PROPOSED APPROACH
In order to propose an efficient approach that solves WSN deployment optimization, precise modelling of physical phenomena and rigorous mathematical formulation are required. For this purpose, we start by defining the deployment space and modelling the WSN. Then, we perform a mathematical formulation of different metrics to be optimized.
A list of symbols used in this paper is displayed in Table 5.
Notation 1: Let X be a set and A one of its subsets. The characteristic function of A is the function 1_A : X → {0, 1} defined by 1_A(x) = 1 if x ∈ A and 1_A(x) = 0 otherwise.
Notation 2: We denote by O_k(n_{i,j}, n_{i',j'}) the function that evaluates whether the line between node n_{i,j} and node n_{i',j'} is obstructed by the k-th obstacle.
A. PHYSICAL MODELS AND PROBLEM FORMULATION

1) SPACE AND NETWORK MODELLING
The real environment is divided into 1 m² squares called cells. The deployment space, denoted by C, is modelled as a set of cells. A cell, denoted by c_{i,j}, represents the elementary entity of space. We assume that nodes can be installed at the centre of these squares (i, j). In a heterogeneous network, each node has its own characteristics. We distinguish two types of nodes: sensor nodes, denoted by s_{i,j}, and sink nodes, denoted by sk_{i,j}. sk_{i,j} is defined by the triplet (RX_{i,j}, TX_{i,j}, τ_{i,j}) and s_{i,j} is defined by (Rmin_{i,j}, Rmax_{i,j}, RX_{i,j}, TX_{i,j}, τ_{i,j}). Any cell c_{i,j} of the deployment space C can contain a sink node and/or a sensor node. Formally, cell occupation is defined by the following four possibilities:
• c_{i,j} = <sk_{i,j}, s_{i,j}>, when the cell contains the sink sk_{i,j} and the sensor s_{i,j};
• c_{i,j} = <sk_{i,j}, null>, when the cell contains only the sink node sk_{i,j};
• c_{i,j} = <null, s_{i,j}>, when the cell contains only the sensor node s_{i,j};
• c_{i,j} = <null, null>, when the cell does not contain any node.
From the cell occupations, we build the set of sensors S and the set of sinks Sk. These two sets form our WSN: the network, denoted by N, installed in C, is formally defined by N = Sk ∪ S.
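The four occupation cases and the construction of Sk, S and N can be sketched as follows (the concrete attribute values are illustrative, not from the paper):

```python
from dataclasses import dataclass
from typing import Optional, Tuple, Dict

@dataclass(frozen=True)
class Sensor:
    r_min: float   # certainty sensing radius
    r_max: float   # maximum sensing radius
    rx: float      # reception sensitivity (dBm)
    tx: float      # transmission power (dBm)
    tau: float     # unit cost

@dataclass(frozen=True)
class Sink:
    rx: float
    tx: float
    tau: float

# Each cell (i, j) holds the pair <sink, sensor>; either slot may be empty.
cells: Dict[Tuple[int, int], Tuple[Optional[Sink], Optional[Sensor]]] = {
    (0, 0): (Sink(rx=-92, tx=0, tau=10.0), None),
    (1, 0): (None, Sensor(r_min=4, r_max=8.5, rx=-92, tx=0, tau=5.0)),
    (2, 1): (None, None),
}

Sk = {pos for pos, (sk, _) in cells.items() if sk is not None}
S = {pos for pos, (_, s) in cells.items() if s is not None}
N = Sk | S   # the network N = Sk ∪ S
```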
2) EXAMPLE Fig. 2 illustrates an example of a simplified deployment space. We consider a 3 × 2 m space in which two sensors and one sink node are installed, modelled as in Fig. 2.
The deployment space may contain different obstacles. These obstacles affect radio propagation and sensing capacity. An obstacle can be any object that leads to attenuation of the radio propagation or blocks sensing when it is crossed. Formally, an obstacle, denoted by O, is characterised by its 2D coordinates in the deployment space, its geometric shape and its attenuation in dB.

3) NETWORK SENSING COVERAGE
The network sensing coverage metric is based on the physical sensing model. In this part, physical models are discussed in order to choose the most precise one. The model parameters are determined through experiments so as to match the real sensing behaviour of the sensor used. Once configured, the model is used to formulate the network sensing coverage.

a: STUDY AND CHOICE OF SENSING MODEL
Among the models summarized in Table 3, the binary model does not represent the real sensing capacity of sensors: real sensors do not provide the same sensing capacity in every direction. For this reason, probabilistic models are more accurate. Among the probabilistic models, we choose the Elfes model. The latter represents a more realistic sensing capacity, as it takes into consideration the sensing degradation with distance and the hardware parameters of the sensor; moreover, it introduces sensing uncertainty. Sensing probabilities are measured at different distances to determine the certainty radius R_min, the maximum sensing radius R_max and the hardware parameters γ and β of the sensor. The motion sensor used is a VMA314 PIR sensor (Fig. 3).
We varied the distance between the sensor and the events to be detected (motion) in steps of 0.5 m, from 0.5 m until the sensor became unable to detect the events. Twenty measurements were taken at each distance to estimate the sensing probability. Table 6 illustrates the real measurements.
Referring to Table 6, we record the following values: R_min = 4 and R_max = 8.5. To match the real measurements, γ must equal 0.1 and β = 2.2. Therefore, the sensing probability for the Elfes model becomes: p(d) = 1 if d ≤ R_min; p(d) = e^(−γ(d−R_min)^β) if R_min < d ≤ R_max; p(d) = 0 otherwise (4). To make our solution adaptable and more flexible to application specifications, we evaluate sensing coverage according to the network designer's preferences. A cell in the deployment space C is considered covered if its centre is covered by at least one sensor node. Let p_cov and p(d(c_{i,j}, s_{i',j'})) be, respectively, the acceptable sensing probability fixed by the network designer and the sensing probability of a cell c_{i,j} by a sensor s_{i',j'}, based on equation (4); O denotes an obstacle in the set of obstacles present in the environment. Formally, we evaluate the capacity of the sensor s_{i',j'} to cover the cell c_{i,j}, based on the sensing model, in equation (5). According to equation (5), we evaluate the set of cells covered by the sensor s_{i,j}, denoted by φ(s_{i,j}). Respectively, we define the set of sensors covering a given cell c_{i,j}, denoted by ψ(c_{i,j}).
The sensing coverage of the entire space C is the ratio of the number of covered cells to the total number of cells, where N = Sk ∪ S is the set of all nodes (equation (8)). A cell can be covered by more than one sensor; in this case, we have redundant sensing coverage. Generally, this redundancy must be minimised to avoid energy waste and decrease the WSN cost. However, keeping in mind that sensor nodes are prone to failure, some applications, such as volcanic monitoring, require redundant sensing coverage: cells must then be covered by k sensor nodes, where k is called the sensing coverage degree. By ensuring a k-coverage degree, the deployment space remains covered even if k − 1 nodes fail. Formally, k-coverage is evaluated as the ratio of cells covered by at least k sensors to all cells (equation (9)); if k = 1, it reduces to the sensing coverage. If cells are covered by more than k sensor nodes, we have unnecessary sensing coverage, or sensing coverage redundancy, denoted R_k(C), which we evaluate in equation (10).
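Assuming each cell's cover count (the size of ψ(c_{i,j})) is known, the three coverage metrics can be sketched as ratios over the cells; this is an illustrative reading of the formulas, not the paper's exact equations:

```python
def coverage_metrics(cover_counts, k=1):
    """Return (coverage, k_coverage, redundancy) over the deployment space.

    cover_counts maps each cell to the number of sensors covering it with
    probability >= p_cov.  coverage counts cells seen by at least one
    sensor, k_coverage those seen by at least k sensors, and redundancy
    those seen by more than k sensors.
    """
    n = len(cover_counts)
    covered = sum(1 for c in cover_counts.values() if c >= 1)
    k_covered = sum(1 for c in cover_counts.values() if c >= k)
    redundant = sum(1 for c in cover_counts.values() if c > k)
    return covered / n, k_covered / n, redundant / n
```

With k = 1, the k-coverage coincides with the plain sensing coverage, as stated above.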

4) NETWORK CONNECTIVITY
The network connectivity metric depends on the radio propagation model. Accordingly, the most used models in the literature are studied via real measurements, and the most precise one is chosen and adapted. The selected model is then used to formulate network connectivity.

a: STUDY AND CHOICE OF RADIO PROPAGATION MODEL
From the radio propagation models presented in Section II, we retain the most used ones in the literature (FSPL, 1SM, MWF). First, real RSS measurements and the theoretical values of the retained models are carried out and compared. The experiments are done using ESP32 development kits. This module has a 2.4 GHz Wi-Fi transceiver, provides a transmission power of 0 dBm, and has a reception sensitivity of −92 dBm. The hardware component is depicted in Fig. 4. Fig. 5 illustrates the 13 tested positions and the installation environment. Point A is defined as the transmitter node, and all other nodes are receivers. We fixed our transmitter at the point (1, 4) and then acquired the RSS values at the different tested positions.
The MWF model is modified to consider not only the walls and floors crossed but also other crossed obstacles. The path loss between n_{i,j} and n_{i',j'}, denoted by PL(n_{i,j}, n_{i',j'}), is calculated from:
• PL_0: the attenuation at the reference distance d_0 = 1 m;
• A(O_k): the attenuation due to the k-th obstacle, built empirically (see Table 7);
• G_Tx, G_Rx: the transmitter and receiver antenna gains.
Our space contains many obstacles. In order to obtain more accurate results, we measured the attenuation values caused by the obstacles present. In Fig. 5, lines represent walls: bold lines represent thick walls, and circles are glass obstacles. The obtained attenuation values are given in Table 7.
For FSPL, the path-loss exponent η equals 2. For same-floor propagation, the 1SM assumes η = 4.56. For our modified MWF model, we use an exponent of 1.8. Table 8 reports the mean error (%) of the tested models for each location, and Fig. 6 depicts the RSS (dBm) predicted by each model versus the real measured RSS.
Referring to Table 8 and Fig. 6, we conclude that FSPL assumes propagation in an ideal environment without considering obstacles, which explains its high error values compared to the real RSS measurements. The 1SM is adjusted according to empirical data, but it fails to predict the RSS. The modified MWF model gives the best RSS predictions and was therefore chosen. We note that the η values are determined according to the recommendations of [59] to fit indoor and outdoor environment characteristics.
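A minimal sketch of the modified MWF model, combining the 1 m reference loss, the log-distance term with η = 1.8, and the empirical per-obstacle attenuations A(O_k); pl0 and the antenna gains are placeholder values, not from the paper:

```python
import math

def modified_mwf_path_loss(d, obstacle_attenuations=(), pl0=40.0,
                           eta=1.8, g_tx=0.0, g_rx=0.0):
    """Path loss (dB) between two nodes separated by d metres.

    obstacle_attenuations lists the empirical attenuation A(O_k), in dB,
    of every obstacle crossing the link (cf. Table 7); the antenna gains
    are subtracted from the loss.
    """
    return (pl0 + 10.0 * eta * math.log10(d)
            + sum(obstacle_attenuations) - g_tx - g_rx)
```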
b: CONNECTIVITY FORMULATION
A WSN should ensure efficient and reliable transmission of the detected data. To avoid losing information, it is necessary to connect nodes according to the network topology [48]. Connectivity is evaluated according to the Received Signal Strength (RSS) calculated at the receiving node. In our study, as this measure depends on the Path Loss (PL), it is calculated based on the modified MWF radio propagation model. Two nodes are connected if the RSS calculated at the receiver is greater than its reception sensitivity.
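The RSS criterion just stated can be sketched as follows; the neighbour computation accepts any path-loss function pl(a, b), e.g. the modified MWF model:

```python
def rss(tx_power_dbm, path_loss_db):
    """Received signal strength: transmitted power minus path loss (dB)."""
    return tx_power_dbm - path_loss_db

def neighbours(node, nodes, tx, rx_sens, pl):
    """Nodes whose received signal from `node` meets their sensitivity.

    tx and rx_sens map nodes to transmission power and reception
    sensitivity (dBm); pl(a, b) returns the path loss between a and b.
    """
    return {m for m in nodes if m != node
            and rss(tx[node], pl(node, m)) >= rx_sens[m]}
```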
Let n_{i,j} and n_{i',j'} be, respectively, the sender and receiver nodes from N (sensor or sink). TX_{i,j} and RX_{i',j'} are, respectively, the transmission power of n_{i,j} and the reception sensitivity of n_{i',j'}, and PL(n_{i,j}, n_{i',j'}) is the path loss between them. The RSS calculated at n_{i',j'} for a signal sent by n_{i,j} is denoted RSS(n_{i',j'}, n_{i,j}). Based on the RSS values evaluated between the different nodes, we represent our WSN as a graph G = (N, E). Accordingly, we define for each node its neighbours: a node's neighbours are the nodes present within its transmission range. Let N be our WSN and n_{i,j} a node in N (either a sensor or a sink); its neighbours are denoted η(n_{i,j}).
c: TOPOLOGY FORMULATION
In order to propose a generic solution, WSN connectivity is evaluated according to the network topology expected by the WSN designer. Mesh, star and infrastructure topologies are supported, and the connectivity evaluation function is formulated according to the expected topology. In a mesh topology, each node must have at least two neighbours, and every pair of distinct nodes has a path between them. The mesh topology is illustrated in Fig. 7.
The connectivity function for the mesh topology is evaluated accordingly. In a mesh topology, it is possible to have more than one path from one node to another. This characteristic enables fault tolerance and reduces the network load. For that purpose, we define k-mesh connectivity and its corresponding connectivity function. Likewise, we evaluate connectivity in a star topology: as can be seen in Fig. 8, each sensor node must be connected to the sink node, and only the sink has more than one neighbour.
Let N be the set of all nodes, where N = Sk ∪ S, and let n_{i,j} be a node in N; the corresponding connectivity function for the star topology follows. In an infrastructure topology, each sensor is connected to at least one sink, and the sinks must be connected to each other. Fig. 9 shows an example of node organisation in this topology.
Let N be the set of all nodes, where N = Sk ∪ S, and n_{i,j} a node in N. The function evaluating connectivity in an infrastructure topology is described in equation (18).
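Given each node's neighbour set, the BFS-based connectivity check used later for the graph-connectivity constraint, together with a mesh-topology test, can be sketched as:

```python
from collections import deque

def is_connected(adj):
    """BFS connectivity check: True iff a BFS from an arbitrary node
    visits every node of the graph (adj maps node -> set of neighbours)."""
    start = next(iter(adj))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u] - seen:
            seen.add(v)
            queue.append(v)
    return len(seen) == len(adj)

def mesh_ok(adj, k=2):
    """Mesh criterion: connected graph where each node has >= k neighbours
    (k = 2 for a plain mesh, higher k for k-mesh connectivity)."""
    return is_connected(adj) and all(len(ns) >= k for ns in adj.values())
```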

5) COST
Cost is an important metric that should be kept as low as possible when deploying a WSN [60]; reducing the number of nodes maintains the cost-effectiveness of the WSN. Each node has a cost, including its production, deployment and maintenance. The deployment cost is calculated as the sum of the purchase and installation costs of all sensors and sinks, i.e., the sum of the unit costs τ_{i,j} over all deployed nodes. With this formulation, the heterogeneity of the nodes is considered. τ_{i,j} depends on the node placement and type: some positions are harder to access than others, so τ_{i,j} can, for example, be expressed in terms of the height of the placement position.
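As a small sketch, with τ_{i,j} stored per deployed node, the deployment cost is simply the sum of the unit costs:

```python
def network_cost(node_costs):
    """Total deployment cost: sum of tau_ij over all deployed nodes.

    node_costs maps each node position to its unit cost tau_ij, which
    bundles purchase and installation and may vary with placement
    (e.g. hard-to-reach positions carry a higher tau_ij).
    """
    return sum(node_costs.values())
```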

6) CMOOP FORMALIZATION
The metrics modelled above allow the QoS of the WSN to be evaluated. In order to find an optimal WSN, our problem is formulated as a maximization and minimization problem over these metrics. In our formalization, we define a constraint related to the available budget, which is specified by the network designer: the cost of the provided solution must not exceed the specified budget. Similarly, we define constraints related to the k-coverage and m-connectivity degrees. Another constraint relates to graph connectivity. To ensure graph connectivity, we use the BFS algorithm [61] to explore our graph. This function, called BFS, takes as input an arbitrary node n and the graph G = (N, E) and returns the set of all visited nodes. We denote by β(G, n) the function that evaluates whether a given graph G is connected: β(G, n) = 1 if BFS(G, n) = N, and 0 otherwise. Our CMOOP is then formalised with the objectives: maximize the sensing coverage (equation (8)); maximize the k-coverage (equation (9)); minimize the sensing coverage redundancy (equation (10)); minimize the cost (equation (19)); subject to the constraints defined above. These objectives are counterbalanced: increasing sensing coverage requires more nodes [60], and decreasing the sensing coverage redundancy can lead to connectivity holes. In order to address this issue, we combine multi-objective optimization with a second fitness function based on the weighted-sum method, which allows us to set the importance degree of each objective. S is the set of solutions, and w_i is the weight of the i-th objective. This weight depends on user preferences, which vary from one application to another, and indicates the importance of each objective in the evaluation of the final solutions.
In our approach, we combine these two optimization methods in order to have a more flexible and holistic solution.
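The weighted-sum scalarisation can be sketched as below, under the assumption (ours, not stated in the paper) that all objectives are normalised to [0, 1] and oriented so that higher is better (a minimised objective x is passed as 1 − x):

```python
def weighted_sum_fitness(objectives, weights):
    """Scalar fitness of a solution: sum of w_i * objective_i.

    The weights encode the designer's per-application preferences and
    are assumed to be non-negative.
    """
    assert len(objectives) == len(weights)
    return sum(w * o for w, o in zip(weights, objectives))
```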

B. PROPOSED OPTIMIZATION ALGORITHM
After validating the models to be used and formulating the WSN deployment problem, the next step is to find the optimal solution. The algorithm we propose is based on the genetic algorithm, which has proven effective for solving, within a reasonable time, problems for which there is no exact method or whose solution is unknown [33], [34]. It relies on the bio-inspired processes of natural evolution. In this algorithm, possible solutions are called individuals; the set of individuals, denoted by P, forms a population. Fig. 10 illustrates the genetic algorithm process.
Initially, we start with a set of randomly created individuals. Then we evaluate the individuals and identify the best ones. With this approach, the best individuals survive: they are crossed and mutated to create a new generation. The old generation and the new one then compete for a place in the next generation. By replacing the weakest individuals, we improve the average performance level, and we iterate for a defined number of generations. In a CMOOP, finding a globally optimal solution is far less likely than in a single-objective optimization problem. For that reason, we combine a weighted-sum objective function with the NSGA-II selection operator [62]. To apply the genetic process, we must adapt the individual representation to our problem.

1) INDIVIDUAL CODING
We represent an individual as a vector whose size equals the number of cells |C|. If we consider the deployment space illustrated in Fig. 2, the corresponding individual is represented as in Fig. 11. Each element of this vector is a gene, and it represents a cell of the deployment space C.
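For the 3 × 2 m space of Fig. 2, the flat encoding can be sketched as one gene per cell in row-major order; the 'sink'/'sensor' markers are illustrative stand-ins for full node descriptions:

```python
WIDTH, HEIGHT = 3, 2                     # deployment space of Fig. 2, in cells
individual = [None] * (WIDTH * HEIGHT)   # one gene per cell, initially empty

def gene_index(i, j):
    """Map 2-D cell coordinates (i, j) to a position in the flat vector."""
    return i * HEIGHT + j

individual[gene_index(0, 0)] = 'sink'
individual[gene_index(1, 0)] = 'sensor'
individual[gene_index(2, 1)] = 'sensor'
```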

2) DIFFERENT OPERATORS a: INITIALISATION
In a GA, the first step is to initialise the first population P_0. A population P is defined as follows: P = {Indiv_1, Indiv_2, . . . , Indiv_|P|} (23). P_0 is created randomly: in our proposed solution, initialisation consists of deploying some nodes arbitrarily in the deployment space.

b: SELECTION
The selection operator consists of choosing the individuals that will participate in reproduction. Several operators exist (random selection, best selection, worst selection, roulette-wheel selection, etc.). In a multi-objective problem, in which we try to optimize several contradictory objectives, we use two selection operators. The NSGA-II selection operator [62] selects, from the Pareto front, the parents that will participate in the next generation; these parents take part in the reproduction phase. Moreover, we use elitist selection to find the best individual according to equation (22).

c: CROSSOVER
The general idea of the crossover process is the exchange of some genes between parents. This process helps to explore new areas of the search space. In our approach, we adopt the one-point and two-point crossover strategies (Fig. 12); the crossover points are chosen arbitrarily. We select the R_c·|P| best individuals from P, where R_c is the crossover rate, and cross them: two crossed parents generate two children. Fig. 12 illustrates the one-point crossover process.
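One-point crossover on the flat encoding can be sketched as:

```python
import random

def one_point_crossover(parent1, parent2, rng=random):
    """Swap the gene tails of two parents after a random cut point,
    producing two children of the same length."""
    cut = rng.randrange(1, len(parent1))
    return (parent1[:cut] + parent2[cut:],
            parent2[:cut] + parent1[cut:])
```

Two-point crossover repeats the same idea with two cut points, swapping the middle segment.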

d: MUTATION
Mutation consists of modifying some genes. This operator helps to maintain diversity, but it can also disrupt algorithm convergence. We mutate individuals with a mutation rate R_m: the R_m·|P| best individuals are mutated, and a gene of each selected individual can be mutated with a probability P_m. We have developed two mutation operators. The first adds or removes a node randomly; the second changes the position of a deployed node by adding or subtracting a random value. The proposed mutation operators are illustrated in Fig. 13.
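The two mutation operators can be sketched on the flat encoding; cell contents are simplified to None/'sensor' and all parameter values are illustrative:

```python
import random

def mutate_add_remove(ind, p_m, rng=random):
    """First operator: with probability p_m per gene, toggle a node
    (add one on an empty cell, remove the one on an occupied cell)."""
    return [('sensor' if g is None else None) if rng.random() < p_m else g
            for g in ind]

def mutate_move(ind, delta, rng=random):
    """Second operator: displace one deployed node by a random offset in
    [-delta, +delta] gene positions (clamped), by swapping two cells."""
    occupied = [i for i, g in enumerate(ind) if g is not None]
    if not occupied:
        return list(ind)
    i = rng.choice(occupied)
    j = max(0, min(len(ind) - 1, i + rng.randint(-delta, delta)))
    out = list(ind)
    out[i], out[j] = out[j], out[i]
    return out
```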

3) HYPER-PARAMETERS OF THE PROPOSED GA
After presenting the different operators, the genetic parameters of the proposed algorithm must be studied in order to find the optimal solution while minimising the execution time.

a: CROSSOVER AND MUTATION RATES
The choice of R_c and R_m critically affects the performance and behaviour of the algorithm. To choose the fittest rates, we vary R_c and R_m from 0 to 1 with a step of 0.1. For each couple (R_c, R_m), we save the mean fitness value and the mean execution time of ten executions (10-fold cross-validation). The studied deployment space and input parameters are the same for all simulations. We run the algorithm for each couple and, to choose the best one, we compare execution times. Table 10 shows the execution time for these couples. Accordingly, the couple (0.3, 0.6) has the lowest execution time, so it is chosen, as it represents a trade-off between fitness value and execution time.

b: MUTATION PROBABILITY
From each population P, 60% of the individuals are selected for mutation. A gene of each selected individual is mutated with a probability P_m. This parameter influences the convergence of the proposed algorithm. The tested probabilities and the obtained results are summarized in Fig. 15.
Referring to Fig. 15, the highest fitness value is obtained with P_m = 0.15. Execution time is not strongly affected by this parameter, owing to the nature of the mutation process, which consists of swapping two values or a simple variable assignment.

c: POPULATION SIZE
Population size is another parameter that influences GA convergence. To choose the optimal population size, we ran simulations with population sizes from 10 to 100 individuals, in steps of 10. Fig. 16 shows the influence of the population size on our algorithm.
As illustrated in Fig. 16, for population sizes above 40 individuals the fitness value remains more or less stable at 0.9. On the other hand, more individuals mean more GA operations (selection, crossover, mutation, evaluation) and therefore more execution time. To ensure a compromise between fitness and execution time, the population size is fixed at 40.

d: MAXIMUM NUMBER OF GENERATIONS
Our algorithm terminates when it reaches a maximum number of generations, and its execution time increases proportionally with this maximum. To evaluate the influence of this parameter on convergence, we varied it from 10 to 180 generations and, for each value, executed the algorithm ten times. According to Fig. 17, our algorithm converges after around 90 generations; we therefore, remaining cautious, fixed the value at 100 generations. Table 11 summarizes the suitable parameter values for our GA optimizer after this evaluation of the parameters, execution time and convergence iteration.
Algorithm 1 shows the pseudo-code of the proposed GA.
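The overall flow described above (evaluate, select, cross over, mutate 60% of the population, repeat until the maximum number of generations) can be sketched as a generic GA loop. This is a plain-Python sketch, not the paper's Algorithm 1: the operator signatures and the tournament-style details left to the caller are assumptions.

```python
import random

def run_ga(init_population, evaluate, select, crossover, mutate,
           max_generations=100, p_m=0.15, mutation_share=0.6):
    """Minimal GA main loop (a sketch of the flow, not Algorithm 1).

    - evaluate(ind) -> fitness (higher is better)
    - select(population, fitness) -> list of parent copies
    - crossover(a, b) -> list of offspring
    - mutate(ind, p_m) mutates an individual in place
    """
    population = list(init_population)
    fitness = [evaluate(ind) for ind in population]
    for _ in range(max_generations):  # stopping criterion
        parents = select(population, fitness)
        offspring = []
        for a, b in zip(parents[::2], parents[1::2]):
            offspring.extend(crossover(a, b))
        # 60% of the population is selected for mutation.
        for ind in random.sample(offspring,
                                 int(mutation_share * len(offspring))):
            mutate(ind, p_m)
        population = offspring
        fitness = [evaluate(ind) for ind in population]
    best = max(range(len(population)), key=lambda i: fitness[i])
    return population[best], fitness[best]
```

In practice the DEAP library provides ready-made equivalents of these operators (`tools.selTournament`, `tools.cxOnePoint`, etc.); the loop above only makes the control flow explicit.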

IV. EXPERIMENTATIONS AND ANALYSIS OF RESULTS
Before running a simulation, the WSN designer must specify different parameters such as the deployment space dimensions and characteristics, node characteristics, the protocol used, the desired topology and designer preferences (importance of each metric, preferred or excluded positions). This information is formatted to represent the individuals that form the input of our hybrid algorithm. Individuals are processed and evaluated according to the different models until the maximum number of generations is reached. The algorithm returns the optimal solution that satisfies the different constraints (available budget, k-coverage and m-connectivity degrees, etc.). The simulation process is illustrated in Fig. 18. Our optimizer is developed in Python, using the DEAP library under the "PyCharm" development environment. Our tool is executed on a PC with an Intel Core i7-5500U 2.4 GHz processor and 8 GB of RAM. The simulated deployment space consists of a 225 m² area: a corridor in the National School of Engineering of Le Mans University (ENSIM), as shown in Fig. 19.
In the first part of this section, we evaluate the efficiency of our proposed algorithm on different deployment problem instances (topology, k-coverage, m-connectivity and sensing probability). Then, we validate the effect of weights and user preferences on the solution. In the second part, we present a comparison of the obtained results with other works. The simulation parameters are listed in Table 12.
In the following, we evaluate different options of our tool in order to prove its effectiveness. We configure our tool to run 10 instances (10-cross validations) for each described simulation.

A. TOPOLOGY EVALUATION
For the first simulation (Sim n°1), the objective is to monitor all events in the area of interest, and a mesh topology is adopted. The proposed solution is illustrated in Fig. 20. Our tool suggests a WSN with 7 nodes; blue dots represent sensors. For this simulation, the WSN contains 1 sink and 6 sensors. This WSN ensures 174 covered cells with 10 over-covered cells. All nodes are fully connected in a mesh topology.
In the second simulation (Sim n°2), we configured our simulator to generate a star topology, with the same parameters as in the previous simulation. The obtained solution is illustrated in Fig. 21. The red dot represents the sink node. For this simulation, 8 nodes are deployed: 1 sink and 7 sensors. This WSN ensures 175 covered cells, of which 3 are over-covered. All nodes are fully connected.
One more topology is evaluated: the infrastructure topology (Sim n°3). With the same parameters, the obtained solution is shown in Fig. 22. The simulation results for the infrastructure topology show that our tool converged to an optimal solution. It ensured a high sensing coverage rate of 99% (only two cells are not covered) with low coverage redundancy. All sensor nodes are connected to at least one sink node (AP), and the sink nodes are connected to each other.
In the fourth simulation (Sim n°4), our WSN is dedicated to an installed lighting network. The existing lighting system contains 13 light sources with predefined, unchangeable positions. Obviously, with existing nodes available, it is more economical to combine nodes when possible. We consider that these actuators have the same radio frequency specifications as the other nodes. The obtained results are shown in Fig. 23.
In Fig. 23, green dots represent pre-installed light sources. Our tool suggests implementing 8 nodes. The node represented by a red dot is the sink, and 7 sensors are proposed to cover the area of interest. 164 cells are covered by these 7 sensor nodes, and 14 cells are over-covered. In this situation, only one extra RF module is needed (6,28). In the suggested WSN, a sink node and 6 sensors are placed at the same positions as the lighting nodes. The pre-installed lighting nodes are included in the connectivity evaluation, and the proposed WSN is fully connected. The obtained results are compatible with the previously described economic approach. Table 13 summarizes the obtained results for the different simulations; they prove the effectiveness of our proposed approach, which takes into consideration both the required topology and existing nodes.

B. USER PREFERENCES EVALUATION
More experiments were done in order to evaluate the k-coverage and m-connectivity metrics. For the following simulations (Sim n°5-6), we move from a 2-Mesh connectivity degree to a 3-Mesh degree: each node must be connected in a mesh topology to at least 3 other nodes. To satisfy this condition, nodes have to move closer to each other when possible, or extra nodes must be added. Fig. 24 shows the obtained results: all nodes have at least three neighbours. In Sim n°6, nodes are closer to each other than in Sim n°5, which ensured the 3-Mesh degree. As a result of this rapprochement, coverage redundancy increased.
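The m-connectivity condition tested above (every node has at least m neighbours within communication range) can be checked with a short predicate. This sketch assumes a binary disk communication model for clarity; the paper's full connectivity model is more detailed.

```python
import math

def is_m_connected(nodes, m, comm_range):
    """Check that every node has at least m neighbours within
    communication range. A binary disk model is assumed here for
    illustration; node positions are (x, y) tuples."""
    for i, (xi, yi) in enumerate(nodes):
        neighbours = sum(
            1 for j, (xj, yj) in enumerate(nodes)
            if j != i and math.hypot(xi - xj, yi - yj) <= comm_range
        )
        if neighbours < m:
            return False
    return True
```

During optimization, a candidate deployment failing this predicate must either pull nodes closer together or add extra nodes, which is exactly the behaviour observed in Sim n°5-6.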
In the next simulations (Sim n°7-8), we suppose that the designer requires each cell to be covered by at least two nodes, which requires more nodes. The obtained results are illustrated in Fig. 25. The number of nodes roughly doubles, from an average of 7.1 in Sim n°7 to 15.1 in Sim n°8. Connectivity redundancy and sensing coverage redundancy are not influenced and remain almost the same, which reinforces the effectiveness of our solution.
In the next simulation, we suppose that our application has no tolerance for sensing failure: a cell is covered only if its sensing probability is equal to 1 (p cov = 1). This simulation (Sim n°9) is compared with one where cells are covered when their sensing probability is higher than 0.7 (Sim n°10). By lowering the sensing probability threshold, the space covered by a single sensor increases. To avoid sensing coverage redundancy, sensor nodes move away from each other, which may disconnect some nodes; in that case, extra nodes are needed. Fig. 26 illustrates the obtained solution. The number of used nodes decreased, as did the connectivity redundancy. An extra node was deployed to ensure that node (8, 39) has a second neighbour (a condition for mesh connectivity). Regarding sensing coverage, a high rate is achieved in both simulations (Fig. 26). Table 14 summarizes the simulation results for the different user preferences. These simulations indicate that our optimizer provides an efficient solution while taking different user preferences into consideration.
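The p cov threshold test above can be sketched as follows. Note that the Elfes-style probabilistic sensing model used here (certain detection inside the sensing radius, exponential decay beyond it, decay rate `alpha`) is an illustrative assumption, not necessarily the paper's exact validated model.

```python
import math

def cell_covered(cell, sensors, r_s, p_cov, alpha=0.5):
    """A cell is covered if its best sensing probability reaches p_cov.

    Sketch with an Elfes-style model (an assumption): detection is
    certain within the sensing radius r_s and decays exponentially
    beyond it with rate alpha.
    """
    cx, cy = cell
    best = 0.0
    for sx, sy in sensors:
        d = math.hypot(cx - sx, cy - sy)
        p = 1.0 if d <= r_s else math.exp(-alpha * (d - r_s))
        best = max(best, p)
    return best >= p_cov
```

With p cov = 1 only cells inside the deterministic radius count as covered, whereas p cov = 0.7 enlarges the effective coverage of each sensor, which explains the reduced node count in Sim n°10.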

C. APPLICATION REQUIREMENTS
As previously described, the importance of the metrics varies according to the nature of the application [5]. Accordingly, we vary the weights of the different objectives in order to evaluate their impact on the solution. In simulations 11-13, we changed the importance of the coverage metric (w 2). Table 15 reports the simulation results as a function of the weights.
We can see that while increasing w 2 from 1 to 3, the sensing coverage increases from 93.1% to 99.3% (average of 10 simulations). This also affects the other metrics: ensuring higher sensing coverage requires more nodes, which is consistent with the obtained results. In simulations 14-16, we repeated these experiments varying the weight of sensing coverage redundancy (w 4). Sensing coverage redundancy clearly decreases as w 4 increases, going from 7.5 to 0.5; this reduction is accompanied by a small decrease in the sensing coverage rate. The simulation results are thus shaped by the objective weights, which makes our tool adaptable to the application nature. For example, a surveillance application requires high sensing coverage, while an application with a low available budget should give the cost a higher weight than the other objectives, etc.
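The weighted-sum scalarization that drives this behaviour can be sketched in a few lines. The objective names below are illustrative placeholders, not the paper's exact objective list; metric values are assumed normalized to [0, 1] with "higher is better".

```python
def weighted_sum_fitness(metrics, weights):
    """Scalarize the objective vector with the weighted-sum method.

    `metrics` maps each normalized, to-be-maximized objective to its
    value; `weights` carries the designer-chosen importance w_i.
    The result is the weighted average of the objectives.
    """
    total_w = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in metrics) / total_w
```

Raising w 2 (coverage) makes coverage gains dominate the scalar fitness, so the GA trades redundancy and cost for coverage, matching the trend in Table 15.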

D. COMPARISONS
To validate our work, we compare our optimizer with [21] (Benatia, 2017) and [19] under the same simulation conditions. We chose these studies because they are similar to our problem in terms of the selected criteria. At first, we configured our tool with the same environment and characteristics as in [21]. In [19], the authors tested two different algorithms, one based on MOFPA and the other on PSO. As described in [19], we configured our tool to generate networks with 15, 13 and 10 nodes; in other words, we specified the available budget constraints. Sim n°17 and Sim n°18 represent the simulation results of our tool for the same problem parameters as in [21] and [19], respectively. The simulation parameters are given in Table 16. Fig. 27 illustrates the obtained results for the environment reproduced from [21].
As summarized in Table 18, we ensure full connectivity compared to 96% for [21]. Our sensors cover 97% of the area of interest, while in (Benatia, 2017) only 91% is covered. Although more nodes are used (18 versus 16), we maintained low sensing coverage redundancy, which confirms the optimal placement. In addition, our tool outperforms (MOFPA, 2017) and (PSO, 2017) on the different metrics except connectivity, which is due to the unrealistic binary connectivity model used in [19]. Our results are more realistic and more optimal in terms of the different objectives. As can be seen in Tables 15 and 17, our approach outperforms the other approaches in terms of sensing coverage and network connectivity, and these simulations demonstrate the effectiveness and flexibility of our approach with respect to the deployment space shape.
Further comparisons were required to prove the effectiveness of the proposed optimizer in terms of k-coverage, m-connectivity and cost. This evaluation uses the same case study as [23]- [25]. These studies assume a 300 × 300 square meter deployment space in which the targets to be covered are placed randomly. To meet the same simulation conditions, the environment is divided into 30 m × 30 m squares to obtain 100 potential positions where nodes can be deployed. Within these parameters, these studies aim to minimise the number of deployed nodes while maintaining the k-coverage and m-connectivity degrees. For a fair comparison, we switched to the binary sensing coverage and connectivity models used in [23]- [25]. The simulation parameters are summarized in Table 19, and Fig. 28 illustrates an example of this problem instance. We performed extensive experiments with the MOONGA tool to compare it with the other approaches in terms of the number of deployed sensor nodes needed to ensure k-coverage and m-connectivity. We varied k from 1 to 4 and m from 1 to 3. Given the random placement of the target points, the average number of deployed nodes over 10 cross-validations and the best result are reported. Fig. 29 shows the comparison between our proposed tool and the algorithms proposed in [24], [25] and [23].
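The binary k-coverage criterion used in this comparison can be sketched as a simple rate computation: a target counts as covered only if it lies within the sensing radius of at least k sensors. This is an illustrative sketch of the binary disk model from [23]-[25], with hypothetical names.

```python
import math

def k_coverage_rate(targets, sensors, r_s, k):
    """Fraction of targets covered by at least k sensors under the
    binary disk sensing model (coverage is all-or-nothing within the
    sensing radius r_s)."""
    covered = 0
    for tx, ty in targets:
        hits = sum(1 for sx, sy in sensors
                   if math.hypot(tx - sx, ty - sy) <= r_s)
        if hits >= k:
            covered += 1
    return covered / len(targets)
```

MOONGA's objective in this scenario is to minimise the number of sensors for which this rate reaches 1.0 while the m-connectivity condition also holds.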
From Fig. 29, it can be seen clearly that for the different (k, m) combinations, our optimizer places fewer nodes while ensuring full k-coverage and m-connectivity, except for (4,1), where it places more than [23] and [24]. Surprisingly, for these two studies the number of nodes jumped from 42 to 47 for [24] and from 44 to 49 for [23] at the couple (4,2); this finding needs further investigation. We observe that the difference in the number of nodes when varying k is much more significant than when varying m. This is due to the fact that the communication radius (100 m) is greater than the sensing radius (60 m): a large communication range keeps sensors connected to each other while converging to the optimal solution.
Overall, these results indicate that our MOONGA approach provides an efficient solution to the WSN deployment problem and surpasses the other approaches. Moreover, it is a holistic approach that deals with different protocols, topologies, user preferences, preferred or excluded positions, existing nodes and application specifications. Furthermore, it can assist the WSN designer in different applications by letting them specify the importance of each metric.

V. CONCLUSION AND FUTURE WORKS
In this article, we have proposed a new approach to solving the WSN deployment problem. We have modelled the problem as a constrained multi-objective optimization problem. Our main objective, within the framework of this work, consisted of determining the best positions of the wireless nodes while optimizing the sensing coverage, the connectivity and the cost. The limits and shortcomings that we noted through the study of the main approaches existing in the literature, together with the analyses carried out on the results obtained, led us to propose a new approach, called MOONGA, based on the multi-objective genetic algorithm and the weighted-sum optimization method. We have presented our GA-based algorithm with a suitable chromosome representation, evaluation functions, and selection, crossover and mutation operations. This approach makes it possible to generate an optimal deployment based on the required topology, the environment, the specifications of the different applications and user preferences. Several WSN scenarios with different parameters and conditions were simulated extensively, and the simulation results were compared with different algorithms. The analysis of these simulations, resulting from the various experiments carried out on a set of test data, confirms the feasibility and the effectiveness of the proposed approach.
Our future work includes four directions. The first is to conduct a deeper comparative study between our approach and the main approaches studied in the literature, in order to give academics and practitioners more insight into how to optimize the problem of node placement in wireless sensor networks. The second consists in extending our approach to support a representation of the deployment space as a 3D environment. Undoubtedly, this extension will increase the complexity of the problem and the execution time, since the evaluation process is the most demanding in terms of execution time; to remedy this, we plan to integrate machine learning methods to approximate the different assessment functions. As a third direction, we are also aiming to integrate new metrics such as lifespan and energy consumption, which are essential criteria in the field of WSNs. The fourth direction consists in carrying out a computational complexity analysis of our algorithm to show that our solution converges in a reasonable time. This will allow us to confirm that the added value of our results is obtained without additional calculation costs.